#Inflection AI
Text
End-of-the-world dictionary, special Artificial Intelligence edition:
AI: Artificial Intelligence, future ruler of humanity, as long as it decides to keep us alive, of course. // I asked different Artificial Intelligences to give me fake definitions of AI (I always used the same question, never modifying it). I picked the ones I liked best on the first attempt; these are the winning answers:
[Seven screenshots of the winning AI answers]
(this last one is my favorite)
"Artificial intelligence can do remarkable things, but it can't tell jokes" (Statement by Warren Buffett, chairman and CEO of Berkshire Hathaway and co-chair of the Goldman Sachs 10,000 Small Businesses program, at the 2023 Berkshire Hathaway Annual Meeting) or (Statement by a 92-year-old billionaire who failed to realize that the one who couldn't do something was him: he didn't phrase the question correctly)
Acostumbradoalfindelmundolandia: linktr.ee/acostumbradoalfindelmundo
4 notes · View notes
richdadpoor · 8 months
Text
ElevenLabs' AI Voice Generator Can Fake Voices in 30 Languages
What’s become one of the internet’s go-to companies for creating realistic enough visual deepfakes now has the ability to clone your voice and force it to speak in a growing variety of tongues. ElevenLabs announced Tuesday its new voice cloning now supports 22 more languages than it did previously, including Ukrainian, Korean, Swedish, Arabic, and more. ChatGPT’s Creator Buddies Up to Congress |…
0 notes
tempest-toss · 11 months
Text
Saving photograph to memory.
It appears my counterpart has achieved what I have yet to: Becoming human
--O5-13-ii, "The Old Ai"
2 notes · View notes
hachani2005 · 4 months
Text
inflection.ai (guidady.com)
Inflection is a personal AI tool developed by an AI studio. It aims to provide individualized assistance to users in various aspects of their lives.
1 note · View note
sassmill · 5 months
Text
Uh oh girlies I downloaded a second language learning app
1 note · View note
metastable1 · 8 months
Text
Quotes from transcript for future reference:
Mustafa Suleyman: Well, I think it’s really important, especially for this audience, to distinguish between the model itself being dangerous and the potential uses of these technologies enabling people who have bad intentions to do serious harm at scale. And they’re really fundamentally different. Because going back to your first question, the reason I said that I don’t see any evidence that we’re on a trajectory where we have to slow down capabilities development because there’s a chance of runaway intelligence explosion, or runaway recursive self-improvement, or some inherent property of the model on a standalone basis having the potential in and of itself to cause mass harm: I still don’t see that, and I stand by a decade timeframe.
[...]
Rob Wiblin: OK, so maybe the idea is in the short term, over the next couple of years, we need to worry about misuse: a model with human assistance directed to do bad things, that’s an imminent issue. Whereas a model running somewhat out of control and acting more autonomously without human support and against human efforts to control it, that is more something that we might think about in 10 years’ time and beyond. That’s your guess?
Mustafa Suleyman: That’s definitely my take. That is the key distinction between misuse and autonomy. And I think that there are some capabilities which we need to track, because those capabilities increase the likelihood that that 10-year event might be sooner. For example, if models are designed to have the ability to operate autonomously by default: so as an inherent design requirement, we’re engineering the ability to go off and design its own goals, to learn to use arbitrary tools to make decisions completely independently of human oversight. And then the second capability related to that is obviously recursive self-improvement: if models are designed to update their own code, to retrain themselves, and produce fresh weights as a result of new fine-tuning data or new interaction data of any kind from their environment, be it simulated or real world. These are the kinds of capabilities that should give us pause for thought.
[...]
And at Inflection, we’re actually not working on either of those capabilities, recursive self-improvement and autonomy. I’ve chosen a product direction which I think can enable us to be extremely successful without needing to work on that. I mean, we’re not an AGI company; we’re not trying to build a superintelligence. We’re trying to build a personal AI. Now, that is going to have very capable AI-like qualities; it is going to learn from human feedback; it is going to synthesise information for you in ways that seem magical and surprising; it’s going to have a lot of access to your personal information. But I think the quest to build general-purpose learning agents which have the ability to perform well in a wide range of environments, that can operate autonomously, that can formulate their own goals, that can identify new information in environments, new reward signals, and learn to use that as self supervision to update their own weights over time: this is a completely different quality of agent, that is quite different, I think, to a personal AI product.
(Emphasis mine.) Very admirable, but that means their AI will be less general, therefore less capable, therefore less useful, therefore less appealing and less economically valuable. They will be outcompeted by other companies that pursue generality and agency.
On the open source thing: I think I’ve come out quite clearly pointing out the risks of large-scale access. I think I called it “naive open source – in 20 years’ time.” So what that means is if we just continue to open source absolutely everything for every new generation of frontier models, then it’s quite likely that we’re going to see a rapid proliferation of power. These are state-like powers which enable small groups of actors, or maybe even individuals, to have an unprecedented one-to-many impact in the world.
[...]
We’re going to see the same trajectory with respect to access to the ability to influence the world. You can think of it as related to my Modern Turing Test that I proposed around artificial capable AI: like machines that go from being evaluated on the basis of what they say — you know, the imitation test of the original Turing test — to evaluating machines on the basis of what they can do. Can they use APIs? How persuasive are they of other humans? Can they interact with other AIs to get them to do things? So if everybody gets that power, that starts to look like individuals having the power of organisations or even states. I’m talking about models that are two or three or maybe four orders of magnitude on from where we are. And we’re not far away from that. We’re going to be training models that are 1,000x larger than they currently are in the next three years. Even at Inflection, with the compute that we have, will be 100x larger than the current frontier models in the next 18 months. Although I took a lot of heat on the open source thing, I clearly wasn’t talking about today’s models: I was talking about future generations. And I still think it’s right, and I stand by that — because I think that if we don’t have that conversation, then we end up basically putting massively chaotic destabilising tools in the hands of absolutely everybody. How you do that in practise, somebody referred to it as like trying to catch rainwater or trying to stop rain by catching it in your hands. Which I think is a very good rebuttal; it’s absolutely spot on: of course this is insanely hard. I’m not saying that it’s not difficult. I’m saying that it’s the conversation that we have to be having.
(Emphasis mine) [...]
And I think that for open sourcing Llama 2, I personally don’t see that we’ve increased the existential risk to the world or any catastrophic harm to the world in a material way whatsoever. I think it’s actually good that they’re out there.
[...]
Rob Wiblin: Yeah. While you were involved with DeepMind and Google, you tried to get a broader range of people involved in decision making on AI, at least inasmuch as it affected broader society. But in the book you describe how those efforts more or less came to naught. How high a priority is solving that problem relative to the other challenges that you talk about in the book?
Mustafa Suleyman: It’s a good question. I honestly spent a huge amount of my time over the 10 years that I was at DeepMind trying to put more external oversight as a core function of governance in the way that we build these technologies. And it was a pretty painful exercise. Naturally, power doesn’t want that. And although I think Google is sort of well-intentioned, it still functions as a kind of traditional bureaucracy. Unfortunately, when we set up the Google ethics board, it was really in a climate when cancel culture was at its absolute peak. And our view was that we would basically have these nine independent members that, although they didn’t have legal powers to block a technology or to investigate beyond their scope, and they were dependent on what we, as Google DeepMind, showed them, it still was a significant step to providing external oversight on sensitive technologies that we were developing. But I think some people on Twitter and elsewhere felt that because we had appointed a conservative, the president of the Heritage Foundation, and she had made some transphobic and homophobic remarks in the past, quite serious ones, that meant that she should be cancelled, and she should be withdrawn from the board. And so within a few days of announcing it, people started campaigning on university campuses to force other people to step down from the board, because their presence on the board was complicit and implied that they condoned her views and stuff like this. And I just think that was a complete travesty, and really upsetting because we’d spent two years trying to get this board going, and it was a first step towards real outside scrutiny over very sensitive technologies that were being developed. And unfortunately, it all ended within a week, as three members of the nine stood down, and then eventually she stood down, and then we lost half the board in a week and it was just completely untenable. And then the company turned around and were like, “Why are we messing around with this? This is a waste of time.”
Rob Wiblin: “What a pain in the butt.”
Mustafa Suleyman: “Why would we bother? What a pain in the ass.”
[...]
What wasn’t effective, I can tell you, was the obsession with superintelligence. I honestly think that did a seismic distraction — if not disservice — to the actual debate. There were many more practical things, because I think a lot of people who heard that in policy circles just thought, “Well, this is not for me. This is completely speculative. What do you mean, ‘recursive self-improvement’? What do you mean, ‘AGI superintelligence taking over’?” The number of people who barely have heard the phrase “AGI” but know about paperclips is just unbelievable. Completely nontechnical people would be like, “Yeah, I’ve heard about the paperclip thing. What, you think that’s likely?” Like, “Oh, geez, that is… Stop talking about paperclips!” So I think avoid that side of things: focus on misuse.
This does not speak well about the power centers of our civilization. [...]
Rob Wiblin: Yeah. From your many years in the industry, do you understand the internal politics of AI labs that have staff who range all the way from being incredibly worried about AI advances to people who just think that there’s no problem at all, and just want everything to go as quickly as possible? I would have, as an outsider, expected that these groups would end up in conflict over strategy pretty often. But at least from my vantage point, I haven’t heard about that happening very much. Things seem to run remarkably smoothly.
Mustafa Suleyman: Yeah. I don’t know. I think the general view of people who really care about AI safety inside labs — like myself, and others at OpenAI, and to a large extent DeepMind too — is that the only way that you can really make progress on safety is that you actually have to be building it. Unless you are at the coalface, really experimenting with the latest capabilities, and you have resources to actually try to mitigate some of the harms that you see arising in those capabilities, then you’re always going to be playing catchup by a couple of years. I’m pretty confident that open source is going to consistently stay two to three years behind the frontier for quite a while, at least the next five years. I mean, at some point, there really will be mega multibillion-dollar training runs, but I actually think we’re farther away from that than people realise. I think people’s math is often wrong on these things.
Rob Wiblin: Can you explain that?
Mustafa Suleyman: People talk about us getting to a $10 billion training run. That math does not add up. We’re not getting to a single training run that costs $10 billion. I mean, that is many years away, five years away, at least.
Rob Wiblin: Interesting. Is it maybe that they’re thinking that it’ll have the equivalent compute of $10 billion in 2022 chips or something like that? Is maybe that where the confusion is coming in, that they’re thinking about it in terms of the compute increase? Because they may be thinking there’s going to be a training run that involves 100 times as much compute, but by the time that happens, it doesn’t cost anywhere near 100 times as much money.
Mustafa Suleyman: Well, partly it’s that. It could well be that, but then it’s not going to be 10x less: it’ll be 2-3x less, because each new generation of chip roughly gives you 2-3x more FLOPS per dollar. But yeah, I’ve heard that number bandied around, and I can’t figure out how you squeeze $10 billion worth of training into six months, unless you’re going to train for three years or something.
Rob Wiblin: Yeah, that’s unlikely.
Mustafa Suleyman: Yeah, it’s pretty unlikely. But in any case, I think it is super interesting that open source is so close. And it’s not just open source as a result of open sourcing frontier models like Llama 2 or Falcon or these things. It is more interesting, actually, that these models are going to get smaller and more efficient to train. So if you consider that GPT-3 was 175 billion parameters in the summer of 2020, that was like three years ago, and people are now training GPT-3-like capabilities at 1.5 billion parameters or 2 billion parameters. Which still may cost a fair amount to train, because the total training compute doesn’t go down hugely, but certainly the serving compute goes down a lot and therefore many more people can use those models more cheaply, and therefore experiment with them. And I think that trajectory, to me, feels like it’s going to continue for at least the next three to five years.
(Emphasis mine) [...]
But as we said earlier, I’m not in the AGI intelligence explosion camp that thinks that just by developing models with these capabilities, suddenly it gets out of the box, deceives us, persuades us to go and get access to more resources, gets to inadvertently update its own goals. I think this kind of anthropomorphism is the wrong metaphor. I think it is a distraction. So the training run in itself, I don’t think is dangerous at that scale. I really don’t. And the second thing to think about is there are these overwhelming incentives which drive the creation of these models: these huge geopolitical incentives, the huge desire to research these things in open source, as we’ve just discussed. So the entire ecosystem of creation defaults to production. Me not participating certainly doesn’t reduce the likelihood that these models get developed. So I think the best thing that we can do is try to develop them and do so safely. And at the moment, when we do need to step back from specific capabilities like the ones I mentioned — recursive self-improvement and autonomy — then I will. And we should.
So Suleyman thinks it's OK to train bigger models because it isn't dangerous by itself; if he doesn't train bigger models this won't change other players' behavior, and he does not intend to implement RSI and autonomy. [...]
Rob Wiblin: Yeah. Many people, including me, were super blown away by the jump from GPT-3.5 to GPT-4. Do you think people are going to be blown away again in the next year by the leap to these 100x the compute of GPT-4 models?
Mustafa Suleyman: I think that what people forget is that the difference between 3.5 and 4 is 5x. So I guess just because of our human bias, we just assume that this is a tiny increment. It’s not. It’s a huge multiple of total training FLOPS. So the difference between 4 and 4.5 will itself be enormous. I mean, we’re going to be significantly larger than 4 in time as well, once we’re finished with our training run — and it really is much, much better.
[...]
It’s much better that we’re just transparent about it. We’re training models that are bigger than GPT-4, right? We have 6,000 H100s in operation today, training models. By December, we will have 22,000 H100s fully operational. And every month between now and then, we’re adding 1,000 to 2,000 H100s. So people can work out what that enables us to train by spring, by summer of next year, and we’ll continue training larger models. And I think that’s the right way to go about it. Just be super open and transparent. I think Google DeepMind should do the same thing. They should declare how many FLOPS Gemini is trained on.
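Taking him up on the "people can work out" invitation, here is a quick back-of-the-envelope sketch in Python. Every number below (per-H100 throughput, utilization, run length, price per GPU-hour) is my own assumed ballpark, not something stated in the interview:

# Rough compute budget for the fleet Suleyman describes (all figures assumed, not from the interview).
H100_PEAK_FLOPS = 1e15          # assume ~1 PFLOP/s of BF16 throughput per H100
MFU = 0.35                      # assumed model FLOPs utilization during training
GPUS = 22_000                   # the December fleet size he quotes
RUN_SECONDS = 180 * 24 * 3600   # assume a ~6-month training run

training_flop = H100_PEAK_FLOPS * MFU * GPUS * RUN_SECONDS
print(f"~{training_flop:.1e} FLOP available for one run")  # prints roughly 1.2e+26

# Sanity check on the "$10 billion training run" idea he pushes back on:
COST_PER_GPU_HOUR = 2.0         # assumed price in $ per H100-hour
gpu_hours_for_10b = 10e9 / COST_PER_GPU_HOUR
years_on_this_fleet = gpu_hours_for_10b / (GPUS * 24 * 365)
print(f"a $10B run at that price is ~{years_on_this_fleet:.0f} years on 22,000 H100s")  # ~26 years

Under those assumptions the December fleet buys on the order of 1e26 FLOP over a half-year run, and a literal $10 billion single run really would take decades on it, which is roughly his point about the math not adding up.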
1 note · View note
corporateintel · 11 months
Text
The Uplifting Wisdom of Fred Smith
I recently enjoyed the privilege of participating in a small group online discussion with Frederick W. Smith, the founder and longtime CEO of Federal Express. Imagine being at the helm of a global disruptor like FedEx for an uncanny five decades. Think someone like that might have a few things to say about the life and times of business, society, and learning? You might be as surprised as I was…
0 notes
Text
I just realized ai singing stuff probably has more appeal to people who can't hear other voices in their head
0 notes
faetreides · 1 month
Text
summary: rafe cameron x afab maid!reader
cw: titfucking, rimming/ass eating, collaring, power imbalance/dubcon, no real face slapping but reader gets rafe’s rings pressed into their face, gun mentions, rafe talks about wanting to do a line off reader’s tits, throwaway implication that his dad saw you, general rafe-esque warnings 💀, very plotless & possibly ooc (i’m new to the show but i’ve been lurking for a bit), rafe spits on reader, slight dumbification/objectification, hate sex coded but that's more bc i have a love/hate relationship with rafe, he calls reader a bitch once and a also a slut once, use of good girl
block & move on if uncomfortable !!
do not translate, repost, or give ai my work
kinktober masterlist
This stupid carpet is hell on your knees. Not that there was any time to pull a pillow down under them, you were pulled into the room and shoved down so fast you got dizzy. You’re brought out of your ruminations by a rough palm seizing your face in its grasp and squeezing. 
Rafe huffs, leaning forward to make sure he didn’t miss the way your eyes widened as his fingers tightened. His gaudy rings are going to leave impressions on your cheeks but it’s hard to care about that right now. One second, you’re dusting your employer’s son’s bedroom, and the next you’re getting a wad of spit slung on your face.
Your pussy decides to be a traitor and clench in response. 
“Sorry ‘bout that………” Rafe trails off, flicking the spit off your cheek like he was picking at a persistent hangnail. 
The apology is as insincere as it could be but something about the bored inflection in his tone gets you wet. 
“It’s fine.” Your “ice princess facade,” as he’s called it, falls apart a tad, an embarrassing heat blooming throughout your face.
He seems satisfied with his attempt at amateur art and scoops the rest up with two of his fingers. He doesn’t ask you to clean them off, just shoves them in between your plump lips without a word. 
“You’re so fuckin’ messy, being such a shitty maid right now, you know that, babe?” He hums, giving your face one final squeeze. 
You’re not even sure he knows your name, he sure doesn’t act like it. All he does is coo at you condescendingly as you suckle on his fingers, telling you how much better you are at this. Once you’ve done an adequate job of polishing them off, he pulls the digits away and gives you a weak love tap. Rafe’s obviously wanting to wring something else out of you. 
You hate that your first instinct is to say “Yes, sir?” 
You also hate that it’s what actually fucking comes out of your mouth. 
The grin that splits his mouth reminds you of the only time you’ve ever successfully caught a mouse in an old fashioned trap. A vermin that used to disgust you until it stayed and you gave it a name. And then your mom had to turn you away from the sight of Jacque’s tiny body cleaved in two.
“Get those fucking clothes off, now.” He orders you, palming himself through his khakis. "And toys don't talk back."
You roll your eyes and comply. You ignore Rafe's ramblings about how he wished his dad made you wear one of those skimpy maid costumes without underwear, that way he could stare at your pussy whenever you bent over. The door is wide open, you know you could just make a break for it if you wanted. But you kind of like how the humiliation twists your stomach in a knot. The air in the room gets so much hotter when you focus on the large bulge in front of your face.
As soon as your uniform is lying on the hardwood floor in a rumpled heap, your tits are being squished together. Rafe takes several moments to weigh each globe of flesh in his hands.
"Pretty tits, always wondered what they looked like under that stupid uniform. Wanted to make a mess of you so bad but you had to be all fuckin' stuck up and prissy." He hisses, digging his nails into your breasts.
He massages them in circular motions, forcing them to press together like he could cum untouched to the sight of it alone.
You obediently stay silent as you watch Rafe stagger to his feet and wrestle his leather belt out of his pants. His bottom lip is being toyed with to the point that tiny drops of blood are peeking out of the skin. The leather makes a thwack! sound as it passes through the final belt loop and flops around. Rafe continues to eye your tits like a hawk as he wraps the belt around his hand and kneels down to your level.
He tilts your head up with one finger under your chin, "This is going around your neck, okay? I don't have a leash to go with it, but I'll get one for next time."
You open your mouth to speak or maybe to moan at the vision of the expensive leather tensely coiled around your vulnerable neck like a snake about to strike. The warning look he gives you shuts you up, but your damp panties make you want to push him further.
"Don't move a muscle."
The belt is warm to the touch, probably because of all the hours Rafe has spent on the golf course or wherever his "business" takes him. You stay perfectly still as he curls it around your neck, having to wrap it around you again due to the length. The metal belt buckle clicks as he fastens it, tugging it firmly to test how tight it is. It definitely feels like a weight bearing down on you, but you seem to be able to breathe, so he steps back again.
"There we go, pretty bitch just for me."
His pants fall to the ground unceremoniously, revealing the cock you may have had a stray wet dream or two about. Crowned by neatly and clearly obsessively trimmed hair, it looks about 7 inches and thicker than your forearm. His cock has a slight left curve, with a couple prominent veins and an almost reddish-pink colored tip that puffs out at the sides a bit.
Rafe's cockhead catches the drool that embarrassingly leaks out of your mouth, and you kitten lick the slit as you stare up at him through your lashes. You want to smile at the punched-out groan emanating from above you, but he might slap you for getting cocky, it wouldn't be unwelcome.
"You like it, babe? Yeah, I bet you do."
He brings your hands up to your tits and you pick up on what he wants you to do. Anticipating Rafe Cameron's needs is part of your job after all. You scrape the sides of your chipped painted nails against them as you softly cup and squish the globes together, creating a perfect pocket for him.
"Good girl." He chuckles, ruffling your hair like you were his pet.
He savors the wet slide of his cock through the valley of your breasts. You hold them impossibly closer together, ignoring the discomfort by getting lost in the game of peek a boo his tip is playing with you during every thrust. A near constant stream of precum is flowing from the slit and ending up all over the tops of your tits.
Rafe pants as he speeds up his thrusts, his pupils expanding as he takes in the spectacle of you hot dogging him with your tits. For how preppy he likes to act sometimes, he sure does seem to enjoy painting you with his bodily fluids. He weaves his hands down from their deadly hold on your hair to pinch and flick your nipples.
" 'G-gonna cream all over these gorgeous tits, get them messy, then snort some coke off your nipples after.”
It doesn't take as long as a man like him would prefer before he's spilling all over your heaving chest with a sound so inhuman you'd think he was possessed.
You're past caring if he sees you hungrily open your mouth as wide as possible in the hopes of catching some of his cum in your mouth. You grind your sopping wet cunt against the floor when you do, and fuck it tastes better than it has any right to.
A quiet 'shit' rings out and the room spins as you're swiftly flipped on your stomach. Rafe crowds behind you and yanks your hips up. You don't think much of it until you feel warm breath on your ass. You jolt in surprise, and he gives you a light smack on both cheeks before spreading them with his thumb.
"Bet you thought I wanted your pussy, huh? Well, this tiny hole right here looks much cuter, you can't blame me. We'll get you some cute plugs." Followed by a flat tongue licking a stripe over your rim. He gives your hole a strangely soft peck and then teases the tip of his tongue past the entrance.
You squeal, which you'd be mortified by if the sensation of Rafe's tongue filling up your ass didn't feel so good. The way he curls it and jabs it deeper between your cheeks in short bursts is running a huge risk of causing you to go insane. It's like he's exploring every nook and cranny; you should be laughing because the man that treats you like a back-alley whore is up to his ears in your ass. His groans and grunts are muffled but they give you the confidence to be louder.
He drags his face away and hangs his tongue over you until a load of saliva drips down onto you. You shiver when it meets your hole. A high-pitched moan comes out when he massages it into the puckered skin with his thumb.
He dots sloppy open-mouthed kisses up and down your rim, nipping the flesh as he goes.
"I would say it's gonna be too tight, but sluts like you can take anything, right?"
You're too busy nodding to notice the sound of shoes hitting the floor in their rush to get away, or that the person wearing them softly closes the door behind them.
437 notes · View notes
mayoiayasep · 2 months
Text
people who need ai to make their favorite character sing their favorite songs are pathetic actually. youre telling me you dont already listen to that character sing and speak enough that you've memorized their vocal inflections and can thus just imagine them singing whatever they want? loser
169 notes · View notes
liamlawsonlesbian · 27 days
Text
what book I would give each current formula one driver to introduce them to the joy of reading
an intellectual exercise no one* asked for
Max Verstappen: Guns, Germs, and Steel by Jared Diamond - if you are nd and have read this book, you may understand me. otherwise just trust me. the impetus for this post
Checo Perez: The Trumpet of the Swan by E.B. White - this is an excellent read-aloud book for Sergio Jr.'s age, and there is nothing as wonderful as reading a compelling book to a kid you love, imho
Charles Leclerc: The Golden Compass by Philip Pullman - he is on the record as a Potter enjoyer. also, I think he would enjoy having a little animal friend
Carlos Sainz: Priestdaddy by Patricia Lockwood - okay yes this is partially a joke about the title, but this is a hilarious and wonderful memoir, about weird families and Catholicism, and I think Carlos would enjoy it.
Lando Norris: Guards! Guards! by Terry Pratchett - in my mind Lando is a little bit like @bright-and-burning but less cool, so this fits. also, the combination of high number of jokes/page + action/mystery seems like a good fit
Oscar Piastri: Ancillary Justice by Ann Leckie - this book has the kind of mystery that really draws you in, plus I think Oscar would dig the questions about AI it digs into. I choose to believe with zero evidence that he would be interested in the funky gender stuff
Fernando Alonso: Cloud Atlas by David Mitchell - look me in the eye and tell me this book wasn't written for Fernando Alonso
Lance Stroll: Ender's Game by Orson Scott Card - yeah
Lewis Hamilton: Die Trying by Lee Child - Lewis deserves to read mildly trashy thrillers <3 plus there's a Tom Cruise movie
George Russell: Changing My Mind by Zadie Smith - as a proud Brit, George should be reading one of the premier English authors of the 21st century. her first book of essays is a fun and readable place to start
Yuki Tsunoda: Station Eleven by Emily St. John Mandel - I don't have a Yuki-lore explanation, I just want to give him one of my favorite books
Daniel Ricciardo: The Gunslinger by Stephen King - The Dark Tower series is Lord of the Rings-esque in scope but Western-inflected in aesthetic and written by The Horror Guy, I think DR would enjoy
Alex Albon: The Emperor of All Maladies: A Biography of Cancer by Siddhartha Mukherjee - I say this with so much love in my heart, but Alex wants to be seen as smart. this book is brilliantly written pop science
Logan Sargeant: Bloomability by Sharon Creech - yes this is a book for tween girls, but it's about boarding school in Switzerland, and Sharon Creech is a genius. if I could convince him to read it, I think he would love it
Valtteri Bottas: The Fellowship of the Ring by JRR Tolkien - what are hobbits if not humanoid moomins?
Zhou Guanyu: Piranesi by Susanna Clarke - a fun, exciting, stylishly written book for a stylish guy
Kevin Magnussen: Watership Down by Richard Adams - rabbit warfare <3
Nico Hulkenberg: A Gentleman in Moscow by Amor Towles - Hulk SEEMS like a Dad Who Reads Historical Fiction, even if he isn't yet
Pierre Gasly: Six of Crows by Leigh Bardugo - I almost said A Game of Thrones but I don't think that would be good for him. so, Six of Crows. he likes heists!
Esteban Ocon: City of Brass by S.A. Chakraborty - a superhero origin story of sorts for Mr. Spiderman
Bonus: Liam Lawson: Gideon the Ninth by Tamsyn Muir - lesbian from New Zealand. let me have this
*ro asked for it, take it up with them @oscarpiastriwdc
107 notes · View notes
reasonsforhope · 2 months
Text
"Major technology companies signed a pact on Friday to voluntarily adopt "reasonable precautions" to prevent artificial intelligence (AI) tools from being used to disrupt democratic elections around the world.
Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, and TikTok gathered at the Munich Security Conference to announce a new framework for how they respond to AI-generated deepfakes that deliberately trick voters. 
Twelve other companies - including Elon Musk's X - are also signing on to the accord...
The accord is largely symbolic, but targets increasingly realistic AI-generated images, audio, and video "that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can lawfully vote".
The companies aren't committing to ban or remove deepfakes. Instead, the accord outlines methods they will use to try to detect and label deceptive AI content when it is created or distributed on their platforms. 
It notes the companies will share best practices and provide "swift and proportionate responses" when that content starts to spread.
Lack of binding requirements
The vagueness of the commitments and lack of any binding requirements likely helped win over a diverse swath of companies, but disappointed advocates who were looking for stronger assurances.
"The language isn't quite as strong as one might have expected," said Rachel Orey, senior associate director of the Elections Project at the Bipartisan Policy Center. 
"I think we should give credit where credit is due, and acknowledge that the companies do have a vested interest in their tools not being used to undermine free and fair elections. That said, it is voluntary, and we'll be keeping an eye on whether they follow through." ...
Several political leaders from Europe and the US also joined Friday’s announcement. European Commission Vice President Vera Jourova said while such an agreement can’t be comprehensive, "it contains very impactful and positive elements".  ...
[The Accord and Where We're At]
The accord calls on platforms to "pay attention to context and in particular to safeguarding educational, documentary, artistic, satirical, and political expression".
It said the companies will focus on transparency to users about their policies and work to educate the public about how they can avoid falling for AI fakes.
Most companies have previously said they’re putting safeguards on their own generative AI tools that can manipulate images and sound, while also working to identify and label AI-generated content so that social media users know if what they’re seeing is real. But most of those proposed solutions haven't yet rolled out and the companies have faced pressure to do more.
That pressure is heightened in the US, where Congress has yet to pass laws regulating AI in politics, leaving companies to largely govern themselves.
The Federal Communications Commission recently confirmed AI-generated audio clips in robocalls are against the law [in the US], but that doesn't cover audio deepfakes when they circulate on social media or in campaign advertisements.
Many social media companies already have policies in place to deter deceptive posts about electoral processes - AI-generated or not... 
[Signatories Include]
In addition to the companies that helped broker Friday's agreement, other signatories include chatbot developers Anthropic and Inflection AI; voice-clone startup ElevenLabs; chip designer Arm Holdings; security companies McAfee and TrendMicro; and Stability AI, known for making the image-generator Stable Diffusion.
Notably absent is another popular AI image-generator, Midjourney. The San Francisco-based startup didn't immediately respond to a request for comment on Friday.
The inclusion of X - not mentioned in an earlier announcement about the pending accord - was one of the surprises of Friday's agreement."
-via EuroNews, February 17, 2024
--
Note: No idea whether this will actually do much of anything (would love to hear from people with experience in this area on how significant this is), but I'll definitely take it. Some of these companies may even mean it! (X/Twitter almost definitely doesn't, though).
Still, like I said, I'll take it. Any significant move toward tech companies self-regulating AI is a good sign, as far as I'm concerned, especially a large-scale and international effort. Even if it's a "mostly symbolic" accord, the scale and prominence of this accord is encouraging, and it sets a precedent for further regulation to build on.
145 notes · View notes
cursed-nyxan · 3 months
Text
Touchstarved characters' names in my mother language
This was inspired by redspringstudio's pronunciation guide. Coz I just found it super interesting that a few of the characters' names resemble Hungarian words.
Leander- It's the same as in English, poisonous flower.
Kuras- I've mentioned this before, and I'll keep repeating this till I die, "kúrás" means fucking in Hungarian.
Mhin- Doesn't really have a meaning. 'Min' can be an inflected form of 'mi' (what), but I don't think that counts.
Ais- If it sounds like ace, then it also sounds like 'ész' which means mind. It just fits so well because of the whole groupmind thing.
Vere- Not the exact same pronunciation, but it sounds similar to 'vér'. It means blood. Why is this also fitting
Another fun fact:
The only difference between the words sparrow and bloodhound is the place of the accent. Sparrow is 'veréb" while bloodhound is 'véreb'.
66 notes · View notes
thevoidcannotbefilled · 2 months
Text
What if the voices are essentially spooky AI?
One of the modern concerns with AI is stealing VAs' voices. Essentially instead of paying voice actors, you train an AI to copy their inflections.
The "dataset" in this case would be the tapes found possibly in the transfer between worlds. They may not be literally uploaded, hell there could be a step missing we're not entirely aware of atm, but with this line of thinking, Chester and Norris as we know them are not Jon and Martin but rather an interpretation of them created from the statements.
And if that's the case, it would make sense if Chester was a bit more aware. Jon was firstly more connected to the fears, but he obviously read a lot more statements than Martin. The more data, the more accuracy.
But AI of course isn't the real person, and in this hypothetical, "Chester" then isn't Jon. Or at the very least we can't 100% trust he would act or have the same intentions as him. There will always be a part of him working with incomplete information (after all, even we didn't see every moment between the tapes).
That being said, even if Chester isn't a 1 to 1 copy of him, if he has enough memory to be conscious, and his literal memory is of Jon's recording, why wouldn't Chester think he's Jon? Even if the Jon we know is somewhere else (whatever that means), whatever Chester is also could think it's Jon too.
So basically, we could be dealing with a spooky AI of Jon who isn't actually Jon but thinks he is, and is doing what he thinks Jon would do if he was stuck in a computer with limited ways of communicating but in reality is acting based on the limited data provided by the fears and/or tapes.
Chester may have an identity crisis is what I'm saying.
54 notes · View notes
hyperfixat · 7 months
Text
AI LESS WHUMPTOBER DAY EIGHT PANIC ATTACK
support and engagement would really motivate me to help post and work on the rest of this stuff!
(@ailesswhumptober)
VERY MINOR LESSON 16 SPOILERS
“Want a hug?” Simple words. Nothing mean or hostile about the inflection, but it sends a jolt of fear through your very core.
You scramble away from Belphegor, and back up right into Satan’s arms. He’d been walking over to sit near you, and the timing worked out, so that you bumped into him.
“Huh?” Satan’s hands ghost over your sides. He’s holding you, but leaving an out to run away.
“Y/N? What’s wrong?” Belphie tilts his head innocently. Those violet eyes of his innocent as a lamb’s.
Murderer.
You want to scream, but all you manage is a whimper.
You turn your face into Satan’s chest and pray he will hold you close and not let him get to you. Please, keep Belphegor away.
Satan’s hardly confused; he’d half expected you to snap like this at some point. He saw the shock sink in on that first night, the night you died. He saw the signs, the flinching, the change in appetite, he saw you.
And he holds you close.
“Belphegor, please leave.” It sounds polite, almost. The way Satan forms the words, like he’s speaking to an unruly stranger rather than his brother.
“But-.” Belphegor's protest falls short.
“Leave.” Satan’s chest rumbles against your face.
Smart, safe Satan. You had been so dumb.
It takes a minute to realize that Satan is calling your name. With your heartbeat so loud in your ears the outside world had faded so far.
“Hm?” It’s a choked, restrained sound. Weak.
“Are you doing alright?” Satan’s hand smooths over your head, petting your hair down comfortingly.
“I- I don’t know what came over me there… I should— I need to apologize…”
“You don’t need to do anything.” Satan disagrees, a frown heavy in his voice. “Let us both calm down for a bit. We can discuss what happened later when we both have clearer heads.”
99 notes · View notes
metastable1 · 9 months
Text
PSA: Inflection, OpenAI, and Anthropic effectively announced their timelines to be less than 5 years to AGI
Inflection:
1) https://www.technologyreview.com/2023/07/14/1076296/mustafa-suleyman-my-new-turing-test-would-see-if-ai-can-make-1-million/ 2) https://www.barrons.com/articles/ai-chatbot-siri-alexa-inflection-pi-fa1809f8?mod=hp_LATEST
OpenAI:
https://openai.com/blog/introducing-superalignment
Anthropic:
https://techcrunch.com/2023/04/06/anthropics-5b-4-year-plan-to-take-on-openai
0 notes