#Mustafa Suleyman
richdadpoor · 8 months
ElevenLabs' AI Voice Generator Can Fake Voices in 30 Languages
What’s become one of the internet’s go-to companies for creating realistic-enough audio deepfakes can now clone your voice and make it speak in a growing variety of tongues. ElevenLabs announced Tuesday that its voice cloning now supports 22 more languages than it did previously, including Ukrainian, Korean, Swedish, Arabic, and more.
mknewsmedia · 9 months
Should an AI bot making $1m really be the next Turing test?
palaceoftears · 29 days
I don't know why I am
The way I am
Not strong enough to be your man
- Not Strong Enough, boygenius
awkward-sultana · 3 months
Magnificent Century + Faceless Sultanas: Nurbanu Sultan
magnificentlyreused · 1 month
This black and golden vest was first worn by Sultan Süleyman I in the eighth episode of the first season of Magnificent Century. It appeared again on Şehzade Mustafa in the nineteenth episode of the fourth season.
The vest was also worn by Sultan Murad IV in the second episode of the second season of Magnificent Century: Kösem.
julyzaa · 2 months
why is everything blamed on the evil adviser/ambitious wife/concubine? these characters have not considered the person may be fucking dumb
garnetbutterflysblog · 8 months
“Let’s have a chat, man to man”- Good, Suleiman! Break the news in private and don't treat him like he can't understand. Children are often smarter than we realize.
“The baby changed his mind because you didn’t want him”- NO SULEIMAN! BAD SULEIMAN! No amount of “my Mustafa is worth the world” is going to fix you blaming him (or implying blame) when all he wants to know is why his mother is crying!
What the hell did Yavuz Sultan Selim do to you that would make you think for a second that your response was ok on any level?!
Someone give this guy a parenting book! And a whack upside his head! Don’t tell me we have Bible fanfiction by Dante but no parenting books in this time period!
skenosbisworld · 10 months
Ok, I need to rage about something for a sec.
WHY TF DOES HURREM BOW TO MUSTAFA?! AND ON MULTIPLE OCCASIONS!! IT MAKES NO SENSE!!
But seriously, there is no logic in this, and the decision to make Hurrem do this repeatedly is so disrespectful to both her character and her position.
Hurrem is Haseki Sultan. She is Sultan Suleiman's only legal wife, and she is the mother of 4 of his sons, sons who are of equal standing to Mustafa. For some dumb reason, the show likes to forget that the Ottomans weren't European. The Ottomans never abided by a rule that the eldest son is the heir. All sehzades had an equal claim, and the Sultan only had a specific heir if he himself appointed one. But for some reason, the show completely ignores this and pretends that Mustafa was some "Crown Prince." I like Mustafa, but he was never the legal heir.
This is also comparable to the show having Mahidevran bow to Ibrahim, which is equally frustrating. The family of the Sultan (which includes his consorts) always outranks any government official.
These instances are honestly more frustrating to me than Hurrem continuing to be called a hatun even after giving birth to Mehmed, which by show logic entitled her to the Sultan title, because that at least was done intentionally as an insult. Hurrem bowing to Mustafa and Mahidevran bowing to Ibrahim were always played completely straight.
Every time it happened, I felt the overwhelming urge both to slap all the characters involved and to scream into a deep empty void.
I really hope I'm not alone in my intense frustration with the show's seeming lack of awareness of, or respect for, any proper protocol or common sense.
metastable1 · 8 months
Quotes from transcript for future reference:
Mustafa Suleyman: Well, I think it’s really important, especially for this audience, to distinguish between the model itself being dangerous and the potential uses of these technologies enabling people who have bad intentions to do serious harm at scale. And they’re really fundamentally different. Because going back to your first question, the reason I said that I don’t see any evidence that we’re on a trajectory where we have to slow down capabilities development because there’s a chance of runaway intelligence explosion, or runaway recursive self-improvement, or some inherent property of the model on a standalone basis having the potential in and of itself to cause mass harm: I still don’t see that, and I stand by a decade timeframe.
[...]
Rob Wiblin: OK, so maybe the idea is in the short term, over the next couple of years, we need to worry about misuse: a model with human assistance directed to do bad things, that’s an imminent issue. Whereas a model running somewhat out of control and acting more autonomously without human support and against human efforts to control it, that is more something that we might think about in 10 years’ time and beyond. That’s your guess?
Mustafa Suleyman: That’s definitely my take. That is the key distinction between misuse and autonomy. And I think that there are some capabilities which we need to track, because those capabilities increase the likelihood that that 10-year event might be sooner. For example, if models are designed to have the ability to operate autonomously by default: so as an inherent design requirement, we’re engineering the ability to go off and design its own goals, to learn to use arbitrary tools to make decisions completely independently of human oversight. And then the second capability related to that is obviously recursive self-improvement: if models are designed to update their own code, to retrain themselves, and produce fresh weights as a result of new fine-tuning data or new interaction data of any kind from their environment, be it simulated or real world. These are the kinds of capabilities that should give us pause for thought.
[...]
And at Inflection, we’re actually not working on either of those capabilities, recursive self-improvement and autonomy. I’ve chosen a product direction which I think can enable us to be extremely successful without needing to work on that. I mean, we’re not an AGI company; we’re not trying to build a superintelligence. We’re trying to build a personal AI. Now, that is going to have very capable AI-like qualities; it is going to learn from human feedback; it is going to synthesise information for you in ways that seem magical and surprising; it’s going to have a lot of access to your personal information. But I think the quest to build general-purpose learning agents which have the ability to perform well in a wide range of environments, that can operate autonomously, that can formulate their own goals, that can identify new information in environments, new reward signals, and learn to use that as self supervision to update their own weights over time: this is a completely different quality of agent, that is quite different, I think, to a personal AI product.
(Emphasis mine.) Very admirable, but that means their AI will be less general, therefore less capable, therefore less useful, therefore less appealing and less economically valuable. They will be outcompeted by other companies that do pursue generality and agency.
On the open source thing: I think I’ve come out quite clearly pointing out the risks of large-scale access. I think I called it “naive open source – in 20 years’ time.” So what that means is if we just continue to open source absolutely everything for every new generation of frontier models, then it’s quite likely that we’re going to see a rapid proliferation of power. These are state-like powers which enable small groups of actors, or maybe even individuals, to have an unprecedented one-to-many impact in the world.
[...]
We’re going to see the same trajectory with respect to access to the ability to influence the world. You can think of it as related to my Modern Turing Test that I proposed around artificial capable AI: like machines that go from being evaluated on the basis of what they say — you know, the imitation test of the original Turing test — to evaluating machines on the basis of what they can do. Can they use APIs? How persuasive are they of other humans? Can they interact with other AIs to get them to do things? So if everybody gets that power, that starts to look like individuals having the power of organisations or even states. I’m talking about models that are two or three or maybe four orders of magnitude on from where we are. And we’re not far away from that. We’re going to be training models that are 1,000x larger than they currently are in the next three years. Even at Inflection, with the compute that we have, will be 100x larger than the current frontier models in the next 18 months. Although I took a lot of heat on the open source thing, I clearly wasn’t talking about today’s models: I was talking about future generations. And I still think it’s right, and I stand by that — because I think that if we don’t have that conversation, then we end up basically putting massively chaotic destabilising tools in the hands of absolutely everybody. How you do that in practise, somebody referred to it as like trying to catch rainwater or trying to stop rain by catching it in your hands. Which I think is a very good rebuttal; it’s absolutely spot on: of course this is insanely hard. I’m not saying that it’s not difficult. I’m saying that it’s the conversation that we have to be having.
(Emphasis mine.) [...]
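The scaling rates he quotes can be cross-checked against each other. A tiny sketch of that arithmetic (mine, not his):

```python
# Cross-checking the stated scaling rates.
# Claim 1: frontier training compute grows ~1,000x over the next 3 years.
# Claim 2: Inflection's own models grow ~100x in the next 18 months.

industry_rate_per_year = 1000 ** (1 / 3)      # ~10x per year
inflection_rate_per_year = 100 ** (1 / 1.5)   # ~22x per year

print(f"industry:   ~{industry_rate_per_year:.0f}x per year")
print(f"Inflection: ~{inflection_rate_per_year:.0f}x per year")
# Inflection's stated ramp (~22x/year) is roughly double the overall
# trend he describes (~10x/year), i.e. scaling faster than the field.
```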
And I think that for open sourcing Llama 2, I personally don’t see that we’ve increased the existential risk to the world or any catastrophic harm to the world in a material way whatsoever. I think it’s actually good that they’re out there.
[...]
Rob Wiblin: Yeah. While you were involved with DeepMind and Google, you tried to get a broader range of people involved in decision making on AI, at least inasmuch as it affected broader society. But in the book you describe how those efforts more or less came to naught. How high a priority is solving that problem relative to the other challenges that you talk about in the book?
Mustafa Suleyman: It’s a good question. I honestly spent a huge amount of my time over the 10 years that I was at DeepMind trying to put more external oversight as a core function of governance in the way that we build these technologies. And it was a pretty painful exercise. Naturally, power doesn’t want that. And although I think Google is sort of well-intentioned, it still functions as a kind of traditional bureaucracy. Unfortunately, when we set up the Google ethics board, it was really in a climate when cancel culture was at its absolute peak. And our view was that we would basically have these nine independent members that, although they didn’t have legal powers to block a technology or to investigate beyond their scope, and they were dependent on what we, as Google DeepMind, showed them, it still was a significant step to providing external oversight on sensitive technologies that we were developing. But I think some people on Twitter and elsewhere felt that because we had appointed a conservative, the president of the Heritage Foundation, and she had made some transphobic and homophobic remarks in the past, quite serious ones, that meant that she should be cancelled, and she should be withdrawn from the board. And so within a few days of announcing it, people started campaigning on university campuses to force other people to step down from the board, because their presence on the board was complicit and implied that they condoned her views and stuff like this. And I just think that was a complete travesty, and really upsetting because we’d spent two years trying to get this board going, and it was a first step towards real outside scrutiny over very sensitive technologies that were being developed. And unfortunately, it all ended within a week, as three members of the nine stood down, and then eventually she stood down, and then we lost half the board in a week and it was just completely untenable. And then the company turned around and were like, “Why are we messing around with this? This is a waste of time.”
Rob Wiblin: “What a pain in the butt.”
Mustafa Suleyman: “Why would we bother? What a pain in the ass.”
[...]
What wasn’t effective, I can tell you, was the obsession with superintelligence. I honestly think that was a seismic distraction — if not a disservice — to the actual debate. There were many more practical things to talk about, because I think a lot of people who heard that in policy circles just thought, “Well, this is not for me. This is completely speculative. What do you mean, ‘recursive self-improvement’? What do you mean, ‘AGI superintelligence taking over’?” The number of people who have barely heard the phrase “AGI” but know about paperclips is just unbelievable. Completely nontechnical people would be like, “Yeah, I’ve heard about the paperclip thing. What, you think that’s likely?” Like, “Oh, geez, that is… Stop talking about paperclips!” So I think avoid that side of things: focus on misuse.
This does not speak well of the power centers of our civilization. [...]
Rob Wiblin: Yeah. From your many years in the industry, do you understand the internal politics of AI labs that have staff who range all the way from being incredibly worried about AI advances to people who just think that there’s no problem at all, and just want everything to go as quickly as possible? I would have, as an outsider, expected that these groups would end up in conflict over strategy pretty often. But at least from my vantage point, I haven’t heard about that happening very much. Things seem to run remarkably smoothly.
Mustafa Suleyman: Yeah. I don’t know. I think the general view of people who really care about AI safety inside labs — like myself, and others at OpenAI, and to a large extent DeepMind too — is that the only way that you can really make progress on safety is that you actually have to be building it. Unless you are at the coalface, really experimenting with the latest capabilities, and you have resources to actually try to mitigate some of the harms that you see arising in those capabilities, then you’re always going to be playing catchup by a couple of years. I’m pretty confident that open source is going to consistently stay two to three years behind the frontier for quite a while, at least the next five years. I mean, at some point, there really will be mega multibillion-dollar training runs, but I actually think we’re farther away from that than people realise. I think people’s math is often wrong on these things.
Rob Wiblin: Can you explain that?
Mustafa Suleyman: People talk about us getting to a $10 billion training run. That math does not add up. We’re not getting to a single training run that costs $10 billion. I mean, that is many years away, five years away, at least.
Rob Wiblin: Interesting. Is it maybe that they’re thinking that it’ll have the equivalent compute of $10 billion in 2022 chips or something like that? Is maybe that where the confusion is coming in, that they’re thinking about it in terms of the compute increase? Because they may be thinking there’s going to be a training run that involves 100 times as much compute, but by the time that happens, it doesn’t cost anywhere near 100 times as much money.
Mustafa Suleyman: Well, partly it’s that. It could well be that, but then it’s not going to be 10x less: it’ll be 2-3x less, because each new generation of chip roughly gives you 2-3x more FLOPS per dollar. But yeah, I’ve heard that number bandied around, and I can’t figure out how you squeeze $10 billion worth of training into six months, unless you’re going to train for three years or something.
Rob Wiblin: Yeah, that’s unlikely.
Mustafa Suleyman: Yeah, it’s pretty unlikely. But in any case, I think it is super interesting that open source is so close. And it’s not just open source as a result of open sourcing frontier models like Llama 2 or Falcon or these things. It is more interesting, actually, that these models are going to get smaller and more efficient to train. So if you consider that GPT-3 was 175 billion parameters in the summer of 2020, that was like three years ago, and people are now training GPT-3-like capabilities at 1.5 billion parameters or 2 billion parameters. Which still may cost a fair amount to train, because the total training compute doesn’t go down hugely, but certainly the serving compute goes down a lot and therefore many more people can use those models more cheaply, and therefore experiment with them. And I think that trajectory, to me, feels like it’s going to continue for at least the next three to five years.
(Emphasis mine.) [...]
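His "the math does not add up" claim is easy to sanity-check. Here is a quick back-of-envelope sketch in Python; the GPU rental price and run length are my own illustrative assumptions, not figures from the interview:

```python
# Back-of-envelope: what would a "$10 billion training run" require?
# Assumed (not from the interview): ~$2 per H100 GPU-hour, and a
# single training run lasting about six months.

GPU_HOUR_COST = 2.0        # assumed $/GPU-hour
RUN_HOURS = 6 * 30 * 24    # ~4,320 hours in six months
BUDGET = 10e9              # the $10B figure being questioned

gpus_needed = BUDGET / (GPU_HOUR_COST * RUN_HOURS)
print(f"GPUs needed to spend $10B in six months: {gpus_needed:,.0f}")
# -> roughly 1.2 million GPUs running flat out, far beyond any cluster
#    deployed as of this interview. That's his point.

# His related claim: each chip generation gives ~2-3x more FLOPS per
# dollar, so 100x the compute costs ~33-50x (not 100x) a generation on.
for gain in (2.0, 3.0):
    print(f"100x compute at {gain}x FLOPS/$ -> ~{100 / gain:.0f}x the cost")

# And on GPT-3-class models shrinking from 175B to ~2B parameters:
# per-token serving cost scales roughly with parameter count
# (~2*N FLOPs per token), so that's on the order of 88x cheaper to run.
print(f"serving-cost ratio: ~{175e9 / 2e9:.0f}x")
```

Even tripling the assumed GPU-hour price only cuts the requirement to ~400,000 GPUs, so the conclusion is robust to the exact rate.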
But as we said earlier, I’m not in the AGI intelligence explosion camp that thinks that just by developing models with these capabilities, suddenly it gets out of the box, deceives us, persuades us to go and get access to more resources, gets to inadvertently update its own goals. I think this kind of anthropomorphism is the wrong metaphor. I think it is a distraction. So the training run in itself, I don’t think is dangerous at that scale. I really don’t. And the second thing to think about is there are these overwhelming incentives which drive the creation of these models: these huge geopolitical incentives, the huge desire to research these things in open source, as we’ve just discussed. So the entire ecosystem of creation defaults to production. Me not participating certainly doesn’t reduce the likelihood that these models get developed. So I think the best thing that we can do is try to develop them and do so safely. And at the moment, when we do need to step back from specific capabilities like the ones I mentioned — recursive self-improvement and autonomy — then I will. And we should.
So Suleyman thinks it's OK to train bigger models: training them isn't dangerous in itself, his abstaining wouldn't change other players' behavior, and he doesn't intend to implement recursive self-improvement or autonomy. [...]
Rob Wiblin: Yeah. Many people, including me, were super blown away by the jump from GPT-3.5 to GPT-4. Do you think people are going to be blown away again in the next year by the leap to these 100x the compute of GPT-4 models?
Mustafa Suleyman: I think that what people forget is that the difference between 3.5 and 4 is 5x. So I guess just because of our human bias, we just assume that this is a tiny increment. It’s not. It’s a huge multiple of total training FLOPS. So the difference between 4 and 4.5 will itself be enormous. I mean, we’re going to be significantly larger than 4 in time as well, once we’re finished with our training run — and it really is much, much better.
[...]
It’s much better that we’re just transparent about it. We’re training models that are bigger than GPT-4, right? We have 6,000 H100s in operation today, training models. By December, we will have 22,000 H100s fully operational. And every month between now and then, we’re adding 1,000 to 2,000 H100s. So people can work out what that enables us to train by spring, by summer of next year, and we’ll continue training larger models. And I think that’s the right way to go about it. Just be super open and transparent. I think Google DeepMind should do the same thing. They should declare how many FLOPS Gemini is trained on.
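He says people can work out what that hardware enables, so here is one hedged version of the arithmetic. The per-GPU throughput and utilization figures below are outside assumptions, not numbers he gives, so treat the output as an order-of-magnitude estimate only:

```python
# Rough training-FLOP budget for the cluster sizes Suleyman states.
# Assumed (not from the interview): an H100 peaks around 1e15 FLOP/s
# in BF16, and a real training run sustains ~35% of that peak (MFU).

H100_PEAK_FLOPS = 1e15     # assumed peak throughput per GPU
UTILIZATION = 0.35         # assumed model FLOPS utilization
SECONDS_PER_DAY = 86_400

def training_flops(num_gpus: int, days: int) -> float:
    """Total useful FLOPs a cluster can deliver over a training run."""
    return num_gpus * H100_PEAK_FLOPS * UTILIZATION * days * SECONDS_PER_DAY

# His stated ramp: 6,000 H100s today, 22,000 fully operational by December.
for gpus in (6_000, 22_000):
    print(f"{gpus:,} H100s x 90 days -> ~{training_flops(gpus, 90):.1e} FLOPs")
# Under these assumptions the December cluster delivers ~6e25 FLOPs per
# 90-day run, consistent with his claim of training beyond GPT-4 scale.
```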
Episode 17 being entertaining, annoying, and infuriating.
Sparse comments
1) The Leo thing is moving at a snaaaaaail's pace.
I know this is gonna go on for long, so I'm just making peace with it.
2) What a fking random way to start Nigarbrahim. (Talking about the garden scene where she is randomly the one accompanying Hatice.)
I know that Stockholm Syndrome is the name of the game in this show but damn this is honestly offensive.
3) Leo and Matrakçi should co-found a Get Over Her support group.
4) Mocenigo returns, this time convinced there's no way Süleyman will be able to turn the gift of a clock into a not-so-veiled egomaniacal threat.
Of course, Süleyman relishes the challenge and does exactly that.
5) The janissaries being upset about Ibrahim's success is freaking hilarious given they were recruited the same way he was.
6) Gülsah and Gülnihal are honestly baffling in how much they're willing to be walked over.
Even more so Gülsah, given she even VOLUNTEERED for another murder attempt, as if Mahidevran did not literally throw her under the galley the first time.
7) Everyone is STILL ignoring the murderer at large.
8) WHAT THE F**K WAS THAT TIMESKIP WHAT YEAR IS IT WHO PUT MUSTAFA IN AN AGING CAPSULE-
chaos-of-the-abyss · 8 months
i like mustafa, but the way he's completely front and center among suleiman's sons while mehmed is basically just a prop for the fraying of mustafa and suleiman's relationship is annoying. i don't know much about ottoman history, but according to this source, mehmed was a "prince of more exquisite qualities than even mustafa. he had a piercing intellect and a subtle judgment." it would have been nice to... i don't know... see that?
palaceoftears · 5 months
Rewatched the last Fatma scenes some days ago and I can't stop thinking about what Mahi would've told her after she lost her baby & attacked Ayse. Idk, I feel sad that such an opportunity, to see what s3 Mahi thinks of her s1 actions and how that shapes her bond with Fatma, was thrown away so easily. Pretty much like her miscarriage, which was never ever touched on again during the series; this was a chance for her mature self to talk about it. Still, even though we didn't get to see it, I think the way she wanted to handle Fatma's situation is admirable, and one of the few cases where a concubine's mental health is taken into consideration. Maybe that's what she feels would've helped her back in s1 (and she did experience how taking distance stopped her from being absorbed by palace stuff), and that's why she acts like that.
awkward-sultana · 10 months
"Cihangir, my lion son..."
damaseclipsadas · 1 year
Mahidevran Hatun was one of the concubines of Süleyman I and the mother of Şehzade Mustafa, the sultan's eldest son to reach adulthood.