celestialsonata7 · 10 months
Note
I can buy Pat being the crooked man and the energy corrupting him, but I don't like how they treated the actual development that led up to it, does that make sense? He is entitled, and has boundary and anger/jealousy issues. I can buy that, but with his arc I'm not sure if it needed more time, or if the problem was how sudden his change was, or that you grew to like his character through the series.
I didn't like how they handled the whole "he has always been evil, it's impossible for him to change, and any attempts to change are useless or another form of manipulation" thing. I think the series does a very good job at showing his perspective and making you feel sorry for him at times and even empathize with what he is going through, only to then just be like "oh, he is just evil and has always been like that, and when everyone thought he was dead nobody cared, and they resented him for still being alive"
(I acknowledge that what he did was 100% wrong. He did terrible things, and Nicole was within her rights to hate him and be distrustful of him. He is a very scary person, because guys like him actually exist in real life)
It's weird, because I've seen other shows do similar arcs, where a villain turns evil due to their own personal flaws, shortcomings and issues, and the narrative is still able to show them some kind of sympathy or acknowledgement/understanding of their issues while keeping them unredeemable and having them die at the end. Sadly I don't think Pat's character arc had this.
How do you feel about the way his character was handled? Am I the only one who feels this way?
Hello Anon! Thank you for asking this! I could launch into a whole essay about what I think about this. XD So I think I will. Strap in, folks!
For those confused, this is pertaining to the Netflix series 'Raising Dion', and contains MAJOR spoilers, so be forewarned if you plan on watching it.
(I'd like to preface this by saying it's been a hot minute since I watched the show, so I'm sorry if my recollection of its events is a little fuzzy.
I would also like to state that I freaking looooove Jason Ritter, his performance in this show is amazing as always, and the reason the character arc of Pat vexes me so is that I felt they did Jason dirty.)
So, I absolutely agree that it felt like his whole reveal was definitely rushed through, which is really poor writing on their part considering they knew where they were gonna take the character from the beginning. They had all that time to work out how they wanted to present the character and he still didn’t feel fleshed out enough. They just wanted to make the twist so surprising for the audience for shock value that they made sure we would never suspect him until the reveal. I can buy that people lie about themselves, they hide certain traits and motivations, sure, but for him to do a complete 180 of his character was just too ridiculous, especially when you consider how irrevocably determined and devoted he was to his goals once the reveal happened. Either he’s a hurting and desperate man who’s only doing what he needs to survive, OR he’s an evil being bent on making others suffer, you can’t have it both ways.
I think the main reason Pat is so unbelievable as a character is that they kept flip-flopping him. He would CONSTANTLY contradict himself; "Dion, always listen to your Mom, BUT also let's have pizza even though she said 'no'", "Dion, only use your powers for good, BUT also let's cheat at this basketball game", "Dion, good guys are always true, BUT definitely lie about using your powers in public", "Dion, I'm happy you got superpowers and can fight evil, BUT it definitely should have been me instead because I was there and everyone else got powers so I should have too", "Nicole, I like you, however if you don't feel the same way, I'm okay with that, BUT how dare you kiss other guys when you should be with me", like, it's not even bipolar, it's just bad writing. His motivations and desires change so constantly that it all feels random and disorganized, done on a whim. It felt rushed and random simply because you never know what he wants. He wants to be a good godfather and friend to Dion and Nicole, but no actually he just wants to kill them, but no actually he'd never hurt them, but no actually he'd do anything for power. It just doesn't make any sense.
People can have major flaws and still be good people, and people can have good traits and still be bad people, but these traits usually line up with a person's motivations, i.e. "I need to survive but I'll try not to kill if I don't have to, and if I do have to it will upset me because I still value human life. If there was another way, I'd take it but there isn't and that sucks", or "I value YOUR life because I'm attached to you but everything and everyone else is meaningless to me and I have no qualms with ending them for my own gain". You can't have, "I wish I was a better person except just kidding no I don't", it doesn't work that way.
You can say that Pat's motivation from the beginning, to the very end, was power; all he wanted was power, that was his end goal, that was what he always strived for, but then his actions should back that up. If all he wanted was power, why did he waste his time raising Dion, teaching him how to use his powers, helping Nicole, being a good friend to her, working at his job, hoping for promotions? Why not just go out and collect all the powered people at once, keep absorbing people, keep gaining power? You'll say, "well he wanted to wait for Dion to get stronger so he could absorb more power, he needed to be close to Nicole to be close to Dion, he needed the resources of his company to find more powered people", okay, yes, that all makes sense. If that was what he was planning the whole time, then that shows that he's smart and conniving. So everything he did was in service to that main goal; he helps Dion become more powerful so he can take all that power for himself. But he inadvertently creates his downfall, he "dies", all his power is gone, over and done with. Then WHY come back to "warn" everyone of another rising threat? You could say, "he wanted to get his power back", okay, but how? He no longer had the power to absorb people and he didn't even know he COULD get it back, he just knew it was still out there killing, and if he DID know he could get it back, why not go to where the power is in the first place? Why bother with going back to the city and the job and the people that would never help him in a million years? He KNEW he would be shunned and arrested and locked up forever if he showed his face there, and without any power he had no way of escaping, so why go back? He also knew they didn't understand that power either anyways; a superpowered eight-year-old beat him, not the highly educated and technologically advanced company, so what did he think he could gain from going back there? We already know he's smarter than that, if the above is true. By all accounts it doesn't make sense.
You could say the desire for power solely came from the crooked energy, and season one was all just Crooked Man-Pat, and once that was gone the true Pat came back, and he really DID just want to help and redeem himself, fine. But then he should not have been capable of reverting back to Crooked!Pat at the end, before the energy was even back in him. If all he wanted to do was help, then the concept of being powerful would mean nothing to him. The desire for power, the deceitful nature, the entitlement and selfishness should not have been there while he was "trying to help". You can't be selfless and selfish at the same time. You can't be forthcoming and deceitful, you can't be humble and entitled. All those things contradict each other. Again, it doesn't make sense.
Another thing that really, really irked me while watching the series was that, from the very beginning, it seemed like Nicole never really liked Pat anyway. She was ALWAYS cold and distant with him, as if she already knew he was bad news even when he was "trying" to be kind and thoughtful. I don't know if that was a directorial problem or an acting choice, but it felt so unfounded and random, and it made all the "nice" scenes feel forced. Not to mention Nicole was far too quick to just accept that Pat was evil all along; it pulled away from their relationship so much that it felt like they didn't even have one, like the only thing connecting them was Dion. Again, I don't know if that was intentional, or even if anyone else felt that way, but that's how it came across to me and it made it really hard to connect to either of them. Honestly, it felt like Pat had more chemistry with Dion than with her.
The only reason I fell in love with Pat at the beginning, and continued to root for his redemption until the end, was because, as stated at the top, I love Jason Ritter. I've seen many of his roles and applauded all his performances, so yes, I'm biased, but as I'm sure everyone else can agree, he has a kind face, and plays endearing really, really well. He gets cast as kind people quite a lot, so anyone that has seen him before will already feel a sense of comfort when he's on screen. I think that is really the only reason that Pat is so likeable at the start, and possibly feels redeemable at the beginning of season two; people see this kind face and can see him being capable of kindness. But it's all for naught because of the way the show and character are written; there's only so much you can do with what you're given. The script of 'Raising Dion' was basically a mess of tangled Christmas lights that are impossible to untangle, and the more they tried, the more tangled it got.
tl;dr Pat's character was really poorly handled from a writing standpoint; nothing he did made sense, his motivations were all over the place, and his character was never consistent. All that is an absolute crying shame because Jason Ritter was amazing in it anyways and I wish I could have enjoyed it more.
I hope that answered your question, and if not I'm sorry. I've never really written my thoughts on anything like this before and again, it's been so long since I saw the show so I may have gotten some things wrong or misremembered some things. But I had fun writing this, it's been a while since I've written ANYTHING and I really really appreciate you wanting to know my opinions, no one ever wants that. XD Hope you have a lovely day, Anon!
evelili · 20 days
Note
What's your college major (if you feel comfortable answering obviously)?
on paper? computer science. tho recently ive been feeling like this image:
[image]
sexygaywizard · 6 months
Text
My coworker at my school is a compsci major and she doesn't know how to save a PDF 😭
nostalgebraist · 4 months
Text
information flow in transformers
In machine learning, the transformer architecture is a very commonly used type of neural network model. Many of the well-known neural nets introduced in the last few years use this architecture, including GPT-2, GPT-3, and GPT-4.
This post is about the way that computation is structured inside of a transformer.
Internally, these models pass information around in a constrained way that feels strange and limited at first glance.
Specifically, inside the "program" implemented by a transformer, each segment of "code" can only access a subset of the program's "state." If the program computes a value and writes it into the state, that doesn't make the value available to any block of code that might run after the write; instead, only some operations can access the value, while others are prohibited from seeing it.
This sounds vaguely like the kind of constraint that human programmers often put on themselves: "separation of concerns," "no global variables," "your function should only take the inputs it needs," that sort of thing.
However, the apparent analogy is misleading. The transformer constraints don't look much like anything that a human programmer would write, at least under normal circumstances. And the rationale behind them is very different from "modularity" or "separation of concerns."
(Domain experts know all about this already -- this is a pedagogical post for everyone else.)
1. setting the stage
For concreteness, let's think about a transformer that is a causal language model.
So, something like GPT-3, or the model that wrote text for @nostalgebraist-autoresponder.
Roughly speaking, this model's input is a sequence of words, like ["Fido", "is", "a", "dog"].
Since the model needs to know the order the words come in, we'll include an integer offset alongside each word, specifying the position of this element in the sequence. So, in full, our example input is
[ ("Fido", 0), ("is", 1), ("a", 2), ("dog", 3), ]
The model itself -- the neural network -- can be viewed as a single long function, which operates on a single element of the sequence. Its task is to output the next element.
Let's call the function f. If f does its job perfectly, then when applied to our example sequence, we will have
f("Fido", 0) = "is" f("is", 1) = "a" f("a", 2) = "dog"
(Note: I've omitted the index from the output type, since it's always obvious what the next index is. Also, in reality the output type is a probability distribution over words, not just a word; the goal is to put high probability on the next word. I'm ignoring this to simplify exposition.)
You may have noticed something: as written, this seems impossible!
Like, how is the function supposed to know that after ("a", 2), the next word is "dog"!? The word "a" could be followed by all sorts of things.
What makes "dog" likely, in this case, is the fact that we're talking about someone named "Fido."
That information isn't contained in ("a", 2). To do the right thing here, you need info from the whole sequence thus far -- from "Fido is a", as opposed to just "a".
How can f get this information, if its input is just a single word and an index?
This is possible because f isn't a pure function. The program has an internal state, which f can access and modify.
But f doesn't just have arbitrary read/write access to the state. Its access is constrained, in a very specific sort of way.
2. transformer-style programming
Let's get more specific about the program state.
The state consists of a series of distinct "memory regions" or "blocks," which have an order assigned to them.
Let's use the notation memory_i for these. The first block is memory_0, the second is memory_1, and so on.
In practice, a small transformer might have around 10 of these blocks, while a very large one might have 100 or more.
Each block contains a separate data-storage "cell" for each offset in the sequence.
For example, memory_0 contains a cell for position 0 ("Fido" in our example text), and a cell for position 1 ("is"), and so on. Meanwhile, memory_1 contains its own, distinct cells for each of these positions. And so does memory_2, etc.
So the overall layout looks like:
memory_0: [cell 0, cell 1, ...]
memory_1: [cell 0, cell 1, ...]
[...]
Our function f can interact with this program state. But it must do so in a way that conforms to a set of rules.
Here are the rules:
The function can only interact with the blocks by using a specific instruction.
This instruction is an "atomic write+read". It writes data to a block, then reads data from that block for f to use.
When the instruction writes data, it goes in the cell specified by the function's offset argument. That is, the "i" in f(..., i).
When the instruction reads data, the data comes from all cells up to and including the offset argument.
The function must call the instruction exactly once for each block.
These calls must happen in order. For example, you can't do the call for memory_1 until you've done the one for memory_0.
Here's some pseudo-code, showing a generic computation of this kind:
f(x, i) {
    calculate some things using x and i;

    // next 2 lines are a single instruction
    write to memory_0 at position i;
    z0 = read from memory_0 at positions 0...i;

    calculate some things using x, i, and z0;

    // next 2 lines are a single instruction
    write to memory_1 at position i;
    z1 = read from memory_1 at positions 0...i;

    calculate some things using x, i, z0, and z1;

    [etc.]
}
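(If you want something you can actually run: below is a toy Python version of the same program. The arithmetic inside the "calculate" steps is invented filler -- a real transformer block does attention and MLP math, not a tanh of a mean -- but the memory-access pattern follows the rules above exactly.)

import numpy as np

N_BLOCKS = 2   # a real model might have 10 to 100+
D = 4          # width of each memory cell (arbitrary for this toy)

# memory[b] is block b; memory[b][i] is its cell for sequence position i
memory = [[] for _ in range(N_BLOCKS)]

def f(x, i):
    # one run of the "program" on input (x, i), following the rules
    h = np.asarray(x, dtype=float)   # local working state
    for b in range(N_BLOCKS):
        # --- atomic write+read for block b: exactly once, in order ---
        memory[b].append(h.copy())   # write to cell i (append lands at
                                     # index i because calls come in order)
        z = np.mean(memory[b][: i + 1], axis=0)   # read cells 0..i
        h = np.tanh(h + z)           # "calculate some things" (stand-in math)
    return h                         # stand-in for a next-word distribution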
The rules impose a tradeoff between the amount of processing required to produce a value, and how early the value can be accessed within the function body.
Consider the moment when data is written to memory_0. This happens before anything is read (even from memory_0 itself).
So the data in memory_0 has been computed only on the basis of individual inputs like ("a", 2). It can't leverage any information about multiple words and how they relate to one another.
But just after the write to memory_0, there's a read from memory_0. This read pulls in data computed by f when it ran on all the earlier words in the sequence.
If we're processing ("a", 2) in our example, then this is the point where our code is first able to access facts like "the word 'Fido' appeared earlier in the text."
However, we still know less than we might prefer.
Recall that memory_0 gets written before anything gets read. The data living there only reflects what f knows before it can see all the other words, while it still only has access to the one word that appeared in its input.
The data we've just read does not contain a holistic, "fully processed" representation of the whole sequence so far ("Fido is a"). Instead, it contains:
a representation of ("Fido", 0) alone, computed in ignorance of the rest of the text
a representation of ("is", 1) alone, computed in ignorance of the rest of the text
a representation of ("a", 2) alone, computed in ignorance of the rest of the text
Now, once we get to memory_1, we will no longer face this problem. Stuff in memory_1 gets computed with the benefit of whatever was in memory_0. The step that computes it can "see all the words at once."
Nonetheless, the whole function is affected by a generalized version of the same quirk.
All else being equal, data stored in later blocks ought to be more useful. Suppose for instance that
memory_4 gets read/written 20% of the way through the function body, and
memory_16 gets read/written 80% of the way through the function body
Here, strictly more computation can be leveraged to produce the data in memory_16. Calculations which are simple enough to fit in the program, but too complex to fit in just 20% of the program, can be stored in memory_16 but not in memory_4.
All else being equal, then, we'd prefer to read from memory_16 rather than memory_4 if possible.
But in fact, we can only read from memory_16 once -- at a point 80% of the way through the code, when the read/write happens for that block.
The general picture looks like:
The early parts of the function can see and leverage what got computed earlier in the sequence -- by the same early parts of the function. This data is relatively "weak," since not much computation went into it. But, by the same token, we have plenty of time to further process it.
The late parts of the function can see and leverage what got computed earlier in the sequence -- by the same late parts of the function. This data is relatively "strong," since lots of computation went into it. But, by the same token, we don't have much time left to further process it.
3. why?
There are multiple ways you can "run" the program specified by f.
Here's one way, which is used when generating text, and which matches popular intuitions about how language models work:
First, we run f("Fido", 0) from start to end. The function returns "is." As a side effect, it populates cell 0 of every memory block.
Next, we run f("is", 1) from start to end. The function returns "a." As a side effect, it populates cell 1 of every memory block.
Etc.
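In the toy sketch from section 2, this mode is just an ordinary loop over positions (with one-hot vectors standing in for word embeddings):

def run_sequential(xs):
    # run f(x, i) start-to-end for each position, one after another
    global memory
    memory = [[] for _ in range(N_BLOCKS)]   # start from empty blocks
    return [f(x, i) for i, x in enumerate(xs)]

outputs = run_sequential(np.eye(D)[:3])   # three fake "words"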
If we're running the code like this, the constraints described earlier feel weird and pointlessly restrictive.
By the time we're running f("is", 1), we've already populated some data into every memory block, all the way up to memory_16 or whatever.
This data is already there, and contains lots of useful insights.
And yet, during the function call f("is", 1), we "forget about" this data -- only to progressively remember it again, block by block. The early parts of this call have only memory_0 to play with, and then memory_1, etc. Only at the end do we allow access to the juicy, extensively processed results that occupy the final blocks.
Why? Why not just let this call read memory_16 immediately, on the first line of code? The data is sitting there, ready to be used!
Why? Because the constraint enables a second way of running this program.
The second way is equivalent to the first, in the sense of producing the same outputs. But instead of processing one word at a time, it processes a whole sequence of words, in parallel.
Here's how it works:
In parallel, run f("Fido", 0) and f("is", 1) and f("a", 2), up until the first write+read instruction. You can do this because the functions are causally independent of one another, up to this point. We now have 3 copies of f, each at the same "line of code": the first write+read instruction.
Perform the write part of the instruction for all the copies, in parallel. This populates cells 0, 1 and 2 of memory_0.
Perform the read part of the instruction for all the copies, in parallel. Each copy of f receives some of the data just written to memory_0, covering offsets up to its own. For instance, f("is", 1) gets data from cells 0 and 1.
In parallel, continue running the 3 copies of f, covering the code between the first write+read instruction and the second.
Perform the second write. This populates cells 0, 1 and 2 of memory_1.
Perform the second read.
Repeat like this until done.
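Here's the same thing in the toy sketch. The assert at the end checks the property that makes all of this work, which section 4 comes back to: both modes compute exactly the same values.

def run_parallel(xs):
    # run all positions in lockstep, one write+read instruction at a time
    hs = [np.asarray(x, dtype=float) for x in xs]   # one copy of f per position
    mem = [[] for _ in range(N_BLOCKS)]
    for b in range(N_BLOCKS):
        for h in hs:                 # write part: all positions at once
            mem[b].append(h.copy())
        hs = [np.tanh(h + np.mean(mem[b][: i + 1], axis=0))   # read cells 0..i
              for i, h in enumerate(hs)]
    return hs

xs = np.eye(D)[:3]
assert all(np.allclose(s, p)
           for s, p in zip(run_sequential(xs), run_parallel(xs)))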
Observe that this mode of operation only works if you have a complete input sequence ready before you run anything.
(You can't parallelize over later positions in the sequence if you don't know, yet, what words they contain.)
So, this won't work when the model is generating text, word by word.
But it will work if you have a bunch of texts, and you want to process those texts with the model, for the sake of updating the model so it does a better job of predicting them.
This is called "training," and it's how neural nets get made in the first place. In our programming analogy, it's how the code inside the function body gets written.
The fact that we can train in parallel over the sequence is a huge deal, and probably accounts for most (or even all) of the benefit that transformers have over earlier architectures like RNNs.
Accelerators like GPUs are really good at doing the kinds of calculations that happen inside neural nets, in parallel.
So if you can make your training process more parallel, you can effectively multiply the computing power available to it, for free. (I'm omitting many caveats here -- see this great post for details.)
Transformer training isn't maximally parallel. It's still sequential in one "dimension," namely the layers, which correspond to our write+read steps here. You can't parallelize those.
But it is, at least, parallel along some dimension, namely the sequence dimension.
The older RNN architecture, by contrast, was inherently sequential along both these dimensions. Training an RNN is, effectively, a nested for loop. But training a transformer is just a regular, single for loop.
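As a rough shape, with trivial stand-in step functions (nothing here is a real RNN or transformer; it's just the loop structure):

seq_len, n_layers = 4, 3

def rnn_step(h, x):            # stand-in: one RNN cell update
    return h + x

def transformer_layer(hs):     # stand-in: one write+read step, all positions
    return [h + sum(hs[: i + 1]) for i, h in enumerate(hs)]

# RNN training: effectively a nested for loop, sequential in both dimensions
h = 0
for t in range(seq_len):           # positions, strictly in order
    for layer in range(n_layers):  # layers, strictly in order
        h = rnn_step(h, t)

# transformer training: a single for loop, sequential in layers only
hs = list(range(seq_len))
for layer in range(n_layers):      # write+read steps, strictly in order
    hs = transformer_layer(hs)     # every position computed at once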
4. tying it together
The "magical" thing about this setup is that both ways of running the model do the same thing. You are, literally, doing the same exact computation. The function can't tell whether it is being run one way or the other.
This is crucial, because we want the training process -- which uses the parallel mode -- to teach the model how to perform generation, which uses the sequential mode. Since both modes look the same from the model's perspective, this works.
This constraint -- that the code can run in parallel over the sequence, and that this must do the same thing as running it sequentially -- is the reason for everything else we noted above.
Earlier, we asked: why can't we allow later (in the sequence) invocations of f to read earlier data out of blocks like memory_16 immediately, on "the first line of code"?
And the answer is: because that would break parallelism. You'd have to run f("Fido", 0) all the way through before even starting to run f("is", 1).
By structuring the computation in this specific way, we provide the model with the benefits of recurrence -- writing things down at earlier positions, accessing them at later positions, and writing further things down which can be accessed even later -- while breaking the sequential dependencies that would ordinarily prevent a recurrent calculation from being executed in parallel.
In other words, we've found a way to create an iterative function that takes its own outputs as input -- and does so repeatedly, producing longer and longer outputs to be read off by its next invocation -- with the property that this iteration can be run in parallel.
We can run the first 10% of every iteration -- of f() and f(f()) and f(f(f())) and so on -- at the same time, before we know what will happen in the later stages of any iteration.
The call f(f()) uses all the information handed to it by f() -- eventually. But it cannot make any requests for information that would leave itself idling, waiting for f() to fully complete.
Whenever f(f()) needs a value computed by f(), it is always the value that f() -- running alongside f(f()), simultaneously -- has just written down, a mere moment ago.
No dead time, no idling, no waiting-for-the-other-guy-to-finish.
p.s.
The "memory blocks" here correspond to what are called "keys and values" in usual transformer lingo.
If you've heard the term "KV cache," it refers to the contents of the memory blocks during generation, when we're running in "sequential mode."
Usually, during generation, one keeps this state in memory and appends a new cell to each block whenever a new token is generated (and, as a result, the sequence gets longer by 1).
This is called "caching" to contrast it with the worse approach of throwing away the block contents after each generated token, and then re-generating them by running f on the whole sequence so far (not just the latest token). And then having to do that over and over, once per generated token.
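In the toy program's terms, "caching" just means keeping the memory blocks around between calls to f instead of rebuilding them. Here's a sketch of both approaches, reusing the earlier functions (sample_next is a made-up stand-in for turning an output into the next input; real decoding samples from a probability distribution):

def sample_next(out):
    # hypothetical stand-in: map an output vector to the next "word" vector
    return np.tanh(out)

def generate_no_cache(prompt, n_new):
    # the worse approach: rebuild every block for every generated token
    seq = list(prompt)
    for _ in range(n_new):
        outs = run_sequential(seq)       # re-runs f on ALL positions so far
        seq.append(sample_next(outs[-1]))
    return seq

def generate_with_cache(prompt, n_new):
    # KV caching: keep the blocks; each new token appends one cell per block
    out = run_sequential(prompt)[-1]     # fill the cache from the prompt once
    seq = list(prompt)
    for _ in range(n_new):
        seq.append(sample_next(out))
        out = f(seq[-1], len(seq) - 1)   # one call; each block grows by 1 cell
    return seq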
lowkeyfalleninlove · 3 months
Text
One thing I absolutely ADORE about captain swan is how open Emma becomes with Killian. Like, you have this pirate who, for so long, only had the Jolly Roger and revenge to look forward to in his life, and when he meets Emma, he is completely enthralled by her and is willing to show her that he will STAY. Not only that, he is determined to. And Emma, once she's used to him being someone to rely on, comes to really be comfortable with him, and he becomes a necessary part of her life. Then we have S5a with them in Camelot, and they are just determined to hold onto each other while she's experiencing the hardest times right now—
Anyway, all that to say that I have completely adored them ever since me and my mom stumbled upon this crazy show, and I miss them sm!
riddlemefuckingthis · 3 months
Text
I want people to stop calling books problematic for talking about SA. It is not bad to talk about it. It is bad to endorse it, but talking about SA in itself is not evil. I think something a lot of survivors want is to talk about it. I want to talk about SA; I just want to talk about how autonomy is so easily stripped from a person, because it's something that I know so well. I know the pain, I know the trauma that follows, I know the fight that survivors have to endure. That is why I love reading books that talk about this shit. Whether that is fantasy, contemporary, or just plain commentary. I want more people to talk about it, and I think that when people immediately say that a book is "problematic" because it has that stuff in it, it's just another way of silencing survivors of SA.
Anyway, I’m thinking about Captive Prince rn.
matznothere · 2 months
Text
when the dynamic is a red boy and a blue boy who are opposites and dont like each other at all in the beginning, and at some point everyone says "oh yea theyre totally gay" >>>> (bonus points if one's from a small town/a farm, and/or they have at least one space adventure together, and/or a sunset scene)
mia-nina-lilly · 10 days
Text
Being C.S. Pacat's bitch means loving two blondes who, in real life, would be unbearable, but are just sweet babies in books
attex · 4 months
Note
I love your UI design. Any thoughts on them, like their interactions with others?
that innocence is pretty paralleled if we are being honest...
[image]
My interpretation of UI is based on the idea of them just being attention- and stimulation-addicted... They were made like this on purpose, by the same group of engineers (not exactly the same persons, though) as NSH, due to the seeming work efficiency of a more active, stimulation-seeking personality. This is also why they are named ironically, along with having a fake mouth! However, such a personality tends to give diminishing returns, especially when there aren't things to focus on constantly. Which is why UI leaned towards gossiping and messing around.
They don't do things out of malice (the ordeal with 5P was not malicious lol); it's all just another little thing to pass the time. But they do feel guilty and awkward if pushed around a bit. They're more honest than they seem; they won't lie about their mean thoughts or opinions. They will lie if it gets more attention directed at them, or if they dislike the way they're treated for their honesty; but this is all hard to achieve as they can get VERY stubborn. They're naturally curious, in a gawking-at-things-and-not-shutting-up kinda way. If they see something weird, they will point it out and keep bringing it up to talk about it.
I interpret them to be the youngest, being built a short while after 5P. This is why they look the way they do, but they still have distinguishing traits due to the engineers that built them. Their puppet's design should be obvious in the ways it's similar to how NSH's puppet is designed, at least I hope I managed to show that... I also imagine their structures have much bigger bio-engineering lab sections, not for actual production of purposed organisms but rather for experimenting with them and the like. Those two would be occupied with that often, along with their other duties. The small cloak, a lot of parts that light up to indicate status, fake mouth, more angular parts, focus on stripe patterns, sturdier legs... Their cloak has patterns resembling rod cells in eyes, also!
As for their relationships with others in their group...
[image]
LttM gets along well with them, which usually surprises outsiders. She knows they just need things to do, but does get disappointed at their more reckless behavior. UI likes LttM for being a bit too lenient regarding things they do that they probably shouldn't, but besides that they do see her as a trustworthy friend, albeit without taking her senior status too seriously.
[image]
5P definitely doesn't enjoy having to interact with them at all after getting humiliated by them. He doesn't hate them or anything, he never did. While he wasn't surprised about them doing what they did, it still soured his view of them by a lot and feeling that many heavy emotions in one moment didn't help. Otherwise, he can't be bothered with them in general. UI sees 5P as an extremely difficult peer to mess with in any way, he is impatient and easily annoyed but his tendency to just cut things off makes anything silly near impossible. While they do find his issues interesting in a shallow way, a part of them secretly wishes to know more of him on a personal level... Most likely because he is the only one they've never gotten to engage with closely, their nosy interest in him got more blatant as time went on too.
[image]
SRS actually enjoys talking to them a decent amount, though they can't help but feel like there is always a barrier of sorts in fully understanding and connecting with UI. UI finds SRS very amusing. Definitely their "favorite" in the group due to SRS' extrovertedness combined with that iconic tinge of obliviousness. UI has always enjoyed snooping in on SRS, especially when they talk to outsiders. SRS isn't fully aware of the extent of UI observing them like a weird animal, though...
[image]
NSH is neutral, yet wary, regarding UI. They both know pretty well how the other one can behave. He still sees them as a friend, though. He's the second person that tells UI to "behave" the most, but it isn't like UI can't snap back at him for being overly playful as well. UI is nearly the same way towards him. Both of them know of each other's mischievous attitude, and that makes it difficult for them to mess with one another. They can get a bit too caught up in being silly if he eggs them on and vice versa… even if they don't fully notice it, NSH views them as acting a lot more childish. CGW… I haven't thought of and characterized CGW well enough to say anything regarding them honestly… But the things I'm certain of are UI seeing CGW as way too "put together" and unfun, because they act very proper in comparison to everyone else. That's more incentive to mess with them, though. CGW doesn't dislike UI or anything, but they see being closer friends with them as not entirely possible.
celestialsonata7 · 2 years
Text
Are there any gifmakers out there that are willing to view my situation from an objective standpoint and tell me if I'm being stupid or not?
mklinaaa · 10 months
Text
james st clair & violet ballard
[image]
antithcsis · 6 months
Text
if you’re looking to read more books, here are the ones i recommend:
- the darkness outside us by eliot schrefer
- dark rise by c.s. pacat
- the green creek series by t.j. klune (4 books)
- the all for the game series by nora sakavic (3 books)
- the raven boys series by maggie stiefvater (4 books)
they’re all amazing and have carved a hole into my chest and made a home there, i hold them very dear to my heart
andrewminyardslawyer · 4 months
Text
This is 100% what James St. Clair looks like and no one can convince me otherwise
[image]
sir-fluffbutts · 4 months
Note
I have another question, hope I'm not bothering! Has there ever been/will there ever be merch of Kenny? I love that bitch ass motherfucker so much you have NO idea, I would commit a war crime for a pin or keychain of him.
no problem at all! 👍
----------------
imma be fully honest theres been some complications with kenny specifically, but after 2 years i can fully say YES there WILL be merch of kenny (hopefully stickers, pins and acrylic charms) later on!!!!!
i really need to work on my site but im just ass at programs smh
nateofgreat · 6 months
Text
C.S. Lewis: Now for Calormen, I think I'll draw inspiration from a few different sources in Middle Eastern history, but mainly from ancient cultures that practiced things like slavery. The religion I'll base off old pagan beliefs from the time of the Bible, wherein bestial gods were worshiped and human sacrifice was still practiced. After all, Tash will be my analogue for the Dev-
Woke fans: Oh my gosh! ISLAMOPHOBIA!?!?!?!?!?!??!?!? This violent cult reminds me of Islam! Which means Lewis hated Muslims!!!!
C.S. Lewis: Did you miss the part where I gave Tash a bird head?
gaelmeee · 6 months
Text
[image]
James ✨️