dennisboobs · 6 months
Text
i think everyone on sunnytwt needs to be sat down so i can explain to them what basic human empathy is. and then maybe i put them in a blender until they agree to write meta about the characters instead of whether or not charlie day got facial reconstruction surgery.
#ada speaks #u do not exist in a vacuum and your words have the capability to harm others #celebrities may not see your tweets but your balding transmasc mutual and your follower who feels self conscious about her nose will #it is fucking bizarre the way these people conduct themselves online #really. really fucking weird man #and then you see them acting like ppl are 'defending rich white men' #instead of taking issue with the actual shit theyre saying #whether or not you think rcg has been 'under the knife' or not #a) how is this any of your business. you are not entitled to this info nor do you have a free pass to criticize someone's personal choice #b) ask yourself why you feel you need to critique alleged surgeries and how they stack up to imposed conventional beauty standards #c) you do not *own* them. you can have opinions on your own attraction to them but #a person getting plastic surgery or hair plugs or whatever is up to them. not you. if it helps to make them feel better then who cares. #just because it doesn't make them attractive to YOU doesn't mean its okay to point and laugh #if a trans guy got top surgery and it was 'botched' would you act like they were stupid for getting it in the first place? #if a trans woman decided she wanted to surgically shave her jaw would you shame her for that? #it's their body. it's not yours. #for the record i don't believe any of them have gotten work done but think its a stupid thing to speculate on regardless #ive watched family members go through plastic surgeries of varying success. ive seen them get botox and hair plugs and everything #normal everyday people do it and it's not always about vanity #it can be for gender reaffirming reasons (and yes this includes when cis people do it) to alleviate dysphoria #trying to point out alleged surgical alterations made is just. gross #not to mention that holy shit MOST of the shit ppl are saying is like. age. different hairstyles. different facial expressions. #maybe if these people actually watched the show theyd be able to see the gang in action instead of staring at pics like spot the difference
37 notes
Text
I feel like so many times I see a post online and like, maybe I sorta agree with it, but it’ll be framed in such a bizarre way that I’m either unsure or kinda uncomfortable. I mean, as an obvious example, a lot of posts will be framed as “It’s really fucked up that you guys are saying this, please stop” and I’ll be like ?????? I’ve certainly never said that and I literally don’t know anyone who has. So I guess I agree but like...I don’t know, it feels weird to like, critique a view that I’ve never actually encountered. And that format feels very vulnerable to people, you know, literally just making up a guy to be mad at, and everyone else just jumping on the bandwagon and being like “yeah that sucks, fuck those people!” without ever experiencing those people/views for themselves because they literally don’t exist.
I don’t know, maybe it does make sense to kinda call people out sometimes, I’m not an expert on how to conduct discourse or anything. But I just kinda feel like things are usually more effective when they’re (1) written for a general audience and (2) mostly civil and polite rather than hostile and aggressive. And I understand the reasoning of like, “This is likely going to be seen mostly or only by my followers/this community” but like...things on the Internet are public, and I feel like it’s at least worth a passing consideration how someone outside of your immediate circle might perceive what you’re saying.
3 notes
jeremyau · 7 years
Link
The empathy layer
Can an app that lets strangers — and bots — become amateur therapists create a safer internet?
Mar 2, 2017, 10:30am EST
Illustrations by Peter Steineck
In January 2016, police in Blacksburg, Virginia, began looking into the disappearance of a 13-year-old girl named Nicole Lovell. Her parents had discovered her bedroom door barricaded with a dresser, her window open. Lovell was the victim of frequent bullying, both at school and online, and her parents thought she might have run away.
On social media, Lovell posted openly about her anguish. On Kik, a messaging app, Lovell told one contact, “Yes, I’m getting ready to kill myself.” In another exchange, she grabbed a screenshot from a boy she liked who had changed his screen name to “Nicole is ugly as fuck.” She broadcasted these private interactions to the wider world by posting them on her Instagram, where she also snapped a photo of herself looking sad, adding the caption “Nobody cares about me.”
Starved for affection among her peers, Lovell sought it out online. Police found a trail of texts on Kik between Lovell and a user named Dr. Tombstone. Kik allows users to remain anonymous, and over the course of a few months, the conversation turned romantic. Tombstone’s real identity was David Eisenhauer, a freshman at Virginia Tech, five years older than Lovell. In a horrific turn of events, authorities say Eisenhauer lured Lovell to meet him, then murdered her.
According to Kik employees of the time, the tragedy was a moment of reckoning for the platform. At the beginning of 2016, the app laid claim to 200 million users, and 40 percent of teenagers in the US. Kik’s terms of service stated that anyone under the age of 18 needed a parent’s permission to use the app, but these rules were easily ignored. Because the app allowed users to remain anonymous, a wave of negative press around Lovell’s murder painted Kik as a playground for predators. “It was, for the entire company, a shock,” says Yuriy Blokhin, an early Kik employee who recently left the company. “Everyone felt we had to do more, an increased sense of responsibility.”
Executives at Kik wanted a system to identify, protect, and offer resources to its most vulnerable users. But the company had no way to find them, and no system in place for administering care even if it did. Through their investors, Kik was put in touch with a small New York City startup named Koko. The company had created an iPhone app that let users post entries about their stresses, fears, and sorrows. Other users would weigh in with suggestions of how to rethink the problem — a very basic form of cognitive behavioral therapy. It was a peer-to-peer network for a limited form of mental health care, and, according to a clinical trial and beta users, it had shown very positive results. The two teams partnered with a simple goal: find a way to bring the support and care found on Koko to Kik users in need.
But as the two companies talked, a more ambitious idea emerged. What if you could combine the emotional intelligence of Koko’s crowdsourced network with the scale of a massive social network? Was there a way to distribute the mental health resources of Koko more broadly, not just in a single app, but to anywhere people gathered online to socialize and share their feelings? Over the last year the team at Koko has been building a system that would do just that, and in the process, create an empathy layer for the internet.
In 1999 Robert Morris, future co-founder of Koko, was a Princeton psychology major who got good grades but struggled to find direction — or a thesis advisor. “They didn't know what to do with me,” Morris told me recently. “I had a bunch of vague and strange research ideas and I would show up to their office with a bunch of bizarre gadgets I had hacked together: microphones, sensors, lots of wires.”
Morris finally found a home at the MIT Media Lab. A budding coder, Morris spent much of his time on a site called Stack Overflow, a critical resource for programmers looking for help on thorny problems. Morris was blown away by the community’s ability to help him on demand and free of charge and wondered if that crowdsourced model could be applied to other personal challenges. “I struggled with depression on and off for much of my life, but my early time at MIT was especially difficult,” he recalls. “I liked StackOverflow, but I needed something to help me 'debug' my brain, not just my code.” For his thesis project, he set out to build just that.
Based on the peer-to-peer model of Stack Overflow, Morris’ MIT thesis, named Panoply, offered two basic options: submit a post about a negative feeling or respond to one. To quickly build and test the platform, Morris needed users. So he turned to Mechanical Turk, an online marketplace where anyone can crowdsource simple tasks for a small payment.
Morris taught MTurk workers a few basic cognitive behavioral techniques for responding to posts: how to empathize with a tough situation, how to recognize cognitive distortions that amplify life’s troubles, and how to reframe a user’s thinking to provide a more optimistic alternative. The only quality control Morris put in place was a basic test of reading and writing comprehension. For each completed task the MTurk workers were paid a few cents.
Using an online ad for a stress-reduction study, Morris recruited a few hundred volunteers in order to fully test the system. Like the MTurk workers, the subjects were given some brief training and set loose to post their issues and reframe the issues of others. This random assemblage of people was about as far as you could get from trained and expensive therapists. But in a clinical trial conducted along with his dissertation, Morris found that users who spent two months with the Panoply system reported feeling less stressed, less depressed, and more resilient than the control group. And the most effective help was given not by the paid MTurk workers, but by the unpaid volunteers who were themselves part of the experiment.
It was a single study and has not yet been replicated, but it gave Morris confidence that he was onto something big. And then a stranger came calling. “A week after I defended my dissertation, I got several manic emails out of the blue from some guy named Fraser,” Morris said. “It was immediately apparent that he had an incredibly deep understanding of the problem.”
At the same moment that Morris was building Panoply at MIT, Fraser Kelton and Kareem Kouddous, a pair of tech entrepreneurs, had been pursuing the same idea. The pair had hacked together their own version of a peer-to-peer system for therapy. They recruited participants off Twitter and put them into WhatsApp groups, then had one group teach the other group the basics of cognitive behavioral therapy. “At the end of testing, 100 percent of helpers thanked us for the opportunity to participate and asked if they could keep doing it,” said Kelton. “When we asked why, they all said something along the lines of ‘for the first time since I finished therapy I found a way to put 5 or 10 minutes a day toward practicing these techniques.’”
A month later Kelton came across Morris’ work and emailed him immediately. “This is embarrassing, but I think I emailed him two or three times that night,” says Kelton. “We thought we had a clever idea, but he had taken it and jumped miles ahead of where our thinking was, run a clinical trial, gotten results, and defended a dissertation.” Within a few weeks Kelton, Kouddous, and Morris had mocked up a wire frame of an app that became the blueprint for Koko. They called the company Koko because the service is meant to help users by showing them different perspectives. Koko backwards is “ok ok.”
Fraser Kelton, who knew the startup scene, approached investors. “It seemed to us that there was a possibility that a peer-to-peer network in this space was kind of a perfect application,” says Brad Burnham, a managing partner at Union Square Ventures. The firm had previously invested in a number of startups that relied on networks of highly engaged users: Twitter, Tumblr, Foursquare. But Burnham had never seen something quite like Koko before. When Koko users added value to the network by rethinking problems, they actually provided value to themselves, by practicing the core techniques of cognitive behavioral therapy. “By helping others, they were helping themselves, and that seemed like a great synergy,” said Burnham. In January of 2015 Union Square Ventures, along with MIT’s Joi Ito, invested $1 million in Koko. Less than a month later, the company launched its iOS app in beta.
The first time Zelig used Koko, she was sitting in a parking lot waiting to pick up one of her kids from a summer program. She had downloaded the app in search of emotional relief. Her son, an intelligent and outgoing boy with Asperger’s syndrome, seemed to have no place of acceptance outside of home, and was facing the increasing isolation common in the lives of teens on the autism spectrum. Her younger daughter had just been diagnosed with obsessive-compulsive disorder.
“I have a special needs kid and high needs kid. My life is not typical,” Zelig explained in a phone call. “It’s pretty stressful and it’s always on. You make attempts to do your best and things don’t work, which is really scary.” She asked that we only use her Koko screen name in this story to preserve her family’s privacy. “My kids were struggling mightily, and there just wasn’t a way for me to see anything that could possibly make it better.”
The Koko app offered Zelig two choices. She could write a post laying out her troubles and share it with everyone who opened the app. They would give her advice on how to rethink her problems — not offer a solution, but rather suggest a more optimistic spin on the way she saw the world. But Zelig didn’t feel ready to open up about her own struggles. “It was hard for me to take the big things going on in my life and make them the size of a tweet, to get to the core. It was hard to turn loose those emotions.”
Instead, Zelig started reading through posts from other users. The Koko app starts users off with a short tutorial on “rethinking.” The app explains that rethinking isn’t about solving problems, but offering a more optimistic take. It uses memes and cartoons to illustrate the idea: if you choose the right reframe, a cute puppy offers his paw for a high-five. The app walks new users through posts and potential reframes, indicating which rethinks are good and which aren’t. The tutorial can be completed in as little as five minutes.
Once users finish the tutorial, they can scroll through live posts on the site. Despite the minimal training, the issues they are confronted with can be quite serious: an individual who is afraid to tell her family that she’s taking anti-depressants because they might think she’s crazy; a user stressed from school who believes “no one actually likes the real me, and if they see it, they will hate me”; a user with an abusive boyfriend who has come to feel “I am a failure and worth being yelled at.” I walked a friend through the tutorial recently, and they were shocked by how quickly Koko throws you into the deep end of human despair.
Koko lets you write anything you want for a rethink, but also offers simple prompts: “This could turn out better than you think because…,” “A more balanced take on this could be…,” etc. The company screens both the posts and rethinks before they become public, attempting to direct certain users to critical care and weed trolls out of the system. Originally, this was accomplished with human moderators, but increasingly, the company is turning to AI.
Accepting and offering rethinks is meant to help users get away from bad mental habits, cycles of negative thought that can perpetuate their anxiety and depression. Over the next few months, Zelig found herself offering rethinks to other Koko users almost every day. “Having it in your pocket is really good. All of a sudden it would hit me what I needed to say in the reframe, so I would pull my car over, or stand in the produce aisle.”
In the process of giving advice Zelig felt, almost immediately, a sense of relief and control. She began to recognize her own dark moods as variations on the problems she was helping others with. Zelig says the peculiar power of Koko is that by helping others, users are able to help themselves. She eventually got around to sharing her issues, but always felt that “I was more helped by the reframing action than I was by the posting. It trained me to be able to see my world that way.”
The last few years have seen an explosion of startups and mobile apps offering users mental health care on demand. Some, like MoodKit and Anxiety Coach, offer self-guided cognitive behavioral therapy. Others, like Pacifica, mix self-guided lessons with online support groups where users can chat with one another. Apps like Talkspace use the smartphone as a platform for connecting patients with professional therapists, who treat them through calls and text messages.
For the moment, Koko is one of just a few companies built primarily around a peer-to-peer model. Its best analog might be companies like Airbnb or Lyft. Why pay for a hotel room or a black car when a spare apartment or a neighbor’s car is just as good? Why pay for therapy when the advice of strangers has proven to be helpful and free?
Studies have found that cognitive behavioral therapy can be as effective at treating depression and anxiety as prescription drugs. Since the 1980s, people have been practicing self-guided cognitive behavioral therapy through workbooks, CD-ROMs, and web portals. But left to their own devices, most people either don’t finish the courses or stop practicing fairly quickly.
Koko is still a tiny company, staffed by the three co-founders and one full-time employee, all based out of New York City. To date, over 230,000 people have used Koko, and more than 26 million messages have been sent through the app over the last six months. Many, like Zelig, have used it on a daily basis for more than a year. But like so many mobile apps these days, Koko has struggled to attract a large following.
The Koko team always knew it would be difficult to charge users for the app, or to make money advertising to a relatively small number of anonymous users. It was at this critical juncture that the team from Kik came calling. After the murder of Nicole Lovell, Kik reached out to its investors at Union Square Ventures for advice. Burnham connected Kik with Koko, setting in motion an entirely new direction for the young company.
When users sign up for Kik, the first contact added to their address book is a chatbot. It answers questions about the service, tells jokes, and posts updates about new features. “A few months before meeting with Koko, we noticed something interesting happening with the Kik bot,” said Yuriy Blokhin, the former Kik engineer who helped forge the partnership with Koko. “People were not only talking to it the way it was meant to be, as a brand ambassador, but also sometimes people were mentioning they were depressed, concerned about their parents getting a divorce, or being unpopular at school.”
Kik didn’t know how to respond to these kinds of emotional confessions, but Koko did. It had millions of posts, carefully labeled by workers from Mechanical Turk to describe the type of problem each represented. It used that database to train an artificial intelligence that could respond to posts sent to a chatbot. If the content of a message was critical (the user posing a danger to themselves or others), the bot would connect them with a service like Crisis Text Line; if the issue was manageable, the bot would pass the person on to Koko users; if it was a troll, the bot would hide the post. This is the same AI approach Koko now uses to classify posts on its peer-to-peer network.
Once that approach proved successful, Koko went one step further. If a user posted about a stress Koko had a highly rated response for — a sick family member, a difficult test at school, a spat with a significant other — the chatbot would automatically offer up that rethink. The AI was now acting as a node in the peer-to-peer network.
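The routing logic described in the last two paragraphs is, at heart, a three-way triage followed by a lookup for pre-rated responses. Here is a minimal sketch of that flow in Python. To be clear, this is not Koko’s code: the keyword rules standing in for its trained classifier, the label names, and the canned rethinks are all illustrative assumptions.

# Hypothetical sketch of the triage-and-reply flow described above.
# Not Koko's implementation: classify() is a naive keyword stand-in
# for the model trained on Koko's labeled posts, and the canned
# rethinks below are made up for illustration.

from typing import Optional

CRISIS_REFERRAL = (
    "It sounds like you might be in danger. You can reach a trained "
    "counselor through a crisis service like Crisis Text Line."
)

# Stand-ins for highly rated rethinks, keyed by common stressors.
CANNED_RETHINKS = {
    "test": "One bad test doesn't define you; this could turn out "
            "better than you think.",
    "sick": "Just being there for a sick family member already "
            "counts for a lot.",
}

def classify(text: str) -> str:
    """Return 'critical', 'troll', or 'manageable'."""
    lowered = text.lower()
    if "kill myself" in lowered or "hurt myself" in lowered:
        return "critical"
    if "get rekt" in lowered:
        return "troll"
    return "manageable"

def route(text: str) -> Optional[str]:
    """Return the bot's reply, or None when the post should be
    hidden (trolls) or queued for human rethinkers instead."""
    label = classify(text)
    if label == "critical":
        return CRISIS_REFERRAL          # escalate to a crisis service
    if label == "troll":
        return None                     # hide the post
    lowered = text.lower()
    for topic, rethink in CANNED_RETHINKS.items():
        if topic in lowered:
            return rethink              # bot offers a pre-rated rethink
    return None                         # fall through to human peers

print(route("I'm so stressed about this test at school"))

The ordering is the point: escalation is checked before anything else, the bot only answers on its own when a highly rated human response already exists, and everything else falls through to the peer network.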
Beginning in August 2016, any user on Kik could share their stress with the Kokobot. Most received a reply in just a few minutes. Working with Kik made Koko realize how big the business opportunity was. “Do a search on Twitter, Reddit, Tumblr, any social network, and you will find a cohort of users reaching out into the ether with their problems,” said Kelton. The team realized that if they could train an AI to identify and respond to users sharing emotional stress, they might also be able to train algorithms to automatically detect users who were at risk, even if they hadn’t reached out. Koko was transforming itself into an intervention tool, scanning platforms and stepping in of its own volition. Koko hopes to provide these tools to online communities for free, using the feedback to train an AI whose services it can one day sell to digital assistants like Siri and Alexa.
The move into detection and intervention, however, has been complicated. This past January, the team set up the Koko bot on two Reddit forums, r/depression and r/SuicideWatch. It scanned incoming posts and messaged several users offering help.
The response wasn’t what Koko engineers had expected: the community was outraged.
“I feel deeply disturbed that they would use a bot to do this,” wrote one user. “Disgusting that assholes would try and take advantage of people,” wrote another. The moderator of the two forums set up a warning advising users to ignore Koko’s chatbot. “I have to say that the technology itself looks like an interesting idea,” the moderator wrote. “But if it's in the hands of people who behave in this way, that is incredibly disturbing.” The Verge reached out to both moderators and users who left angry comments about Koko, but did not hear back.
The Koko team acknowledged it made a mistake by allowing its chatbot to send messages on Reddit without warning, and by not educating users and moderators about who it was and what its goal was. But Kelton believes that the feedback from users who did interact with the bot on Reddit shows the system can do real good there. “One mod bent out of shape over how we handled the launch vs. many at-risk people helped in a way that they appreciated” was a trade-off Kelton could live with. “Helping mods understand and embrace the service is a containable problem, one that we’re already having good success with.”
In January 2017, top officials from the US military met with executives from Facebook, Google, and Apple at the Pentagon. The topic was suicide prevention in the age of social media. The federal government considers the subject a top priority, as suicide has become a leading cause of death among veterans. For the tech companies, the problem is wide-ranging. Among teenagers in the United States, most of whom spend six and a half hours a day with their smartphones and tablets, suicide is the second leading cause of death.
In attendance was Matthew Nock, a professor of psychology at Harvard and an expert in suicide prediction and prevention. When it comes to using technology for detection and intervention, “the consensus in the academic community is there is great potential promise here, but the jury is still out,” says Nock. “Personally I have seen a lot of interest in people using social media and the latest technologies to understand, predict, and prevent suicidal behavior. But so far many of the claims have outstripped the actual data.”
Despite those concerns, Nock is interested in what companies like Koko might offer. “We know that cognitive behavioral therapy is effective for treating people with clinical depression. There is not enough cognitive therapy to reach everyone who needs it.” Koko provides people with the simple tools they can use to help themselves and others. “These people aren’t clinicians, they have been trained in the basics, but for scaling purposes, I think it’s what we can do right now.”
The scalability of tech makes it an alluring tool for mental health — but the business comes with unique risks. “Everyone wants to be the Uber of mental health,” says Stephen Schueller, an assistant professor at Northwestern University who specializes in behavioral intervention technologies. “The thing I worry about is, unless you have a way to make sure the drivers are behaving appropriately, it’s hard to make sure people are getting quality care. Psychotherapy is a lot more complicated than driving a car.”
Koko’s experience with Reddit wasn’t the first mishap to befall a company trying to scale mental health care, an industry traditionally made up of heavily regulated, sensitive, one-on-one clinical relationships, across an online community. Those challenges were made apparent in the case of Talkspace, where therapists didn’t feel they were able to warn authorities about patients who may have been a danger to themselves or others. That led some therapists to abandon the platform. Samaritans, a 65-year-old organization aimed at helping those in emotional distress, released an app in 2014 called Samaritans Radar. It attempted to identify Twitter users in need of help and offer assistance. But because the interactions were public, the warnings ended up encouraging bullies and angering users who felt their privacy had been invaded.
The ethics of using artificial intelligence for this work has become a central question for the industry at large. “The potential demand for mental health is likely to always outstrip the professional resources,” says John Draper, project director at the National Suicide Prevention Lifeline. “There is increasingly a push to see what technology can do.” If AI can detect users at risk and engage them in emotionally intelligent conversations, should that be the first line of defense? “These are important ethical questions that we haven’t answered yet.”
In a recent manifesto on the state of Facebook, CEO Mark Zuckerberg noted that as people move online, society has seen a tremendous weakening of the traditional community ties that once provided mental and emotional support. To date, creating software that restores or reinforces those safeguards has been a reactive afterthought, not an overarching goal. Systems designed to foster clicks, likes, retweets, and shares have become global communities of unprecedented scale. But Zuckerberg was left to ask, “Are we building the world we all want?”
“There have been terribly tragic events -- like suicides, some live streamed -- that perhaps could have been prevented if someone had realized what was happening and reported them sooner. There are cases of bullying and harassment every day, that our team must be alerted to before we can help out. These stories show we must find a way to do more,” Zuckerberg wrote. “Artificial intelligence can help provide a better approach. We are researching systems that can look at photos and videos to flag content our team should review.” In early March it was reported that Facebook had begun testing an AI system which scanned for vulnerable users and reached out to offer help.
The goal for Koko is the same, but distributed across any online community or social network. The company hopes its AI can reach vulnerable users, people like Nicole Lovell, who are posting cries for help online, searching for an empathic community. On a recent afternoon I opened the Koko app and spent an hour scrolling through a litany of angst: not having the money to complete school, feeling obsessed with an older married man, being overwhelmed at the prospect of caring for sick relatives who can no longer remember your name. Beneath each post, three or four users had suggested rethinks, blueprints for coping that users could learn from.
For people who are suffering, knowing that others are in pain, and that they can do something about it, is one way of healing themselves. “Something that caught me right away and kept me coming back to the app again and again was the amazing feeling of hope,” said Zelig, when I emailed her recently to ask a few questions about Koko. “That regardless of all the crap that seemed to be happening in my life, that I could still be of help to someone and could take a positive action.”
Zelig’s kids, like most teenagers, have become keenly interested in what keeps their mother occupied on her smartphone. “They see me typing away and want to know what I’m doing,” Zelig explained. “I’ll ask them, do you think this is a reframe? How would you do it? It was cool, because it’s a puzzle we solve together. What is the critical thing this person was dealing with? [It’s] an emotional, social puzzle.”
A year and a half after she downloaded the app, Zelig still uses it almost every day, but she doesn’t consider herself to be in a state of crisis anymore. She wasn’t sure how she felt about Koko using chatbots and AI to reach out to people who had never heard of the service. At first she told me that if a chatbot had approached her out of the blue, she would have ignored it. But she wrote back later to say that, if these technologies mean more people find their way into the Koko community, she’s in favor. “Life really had me and our family by the throat there for a while,” she told me. “Koko was part of what gave me the ability to see a way through to the other side.”
0 notes
kivablog3 · 6 years
Text
An impeachment that can’t check its own privilege, and more word games
DC NerdWatch for Friday July 27th 2018 / Midterms in 103 Days
Impeachment: The Privileged Resolution that Wasn’t Privileged
Mark Meadows and Jim Jordan filed articles of impeachment this week against Rod Rosenstein (am I the only one who thinks all this alliteration is vaguely comic-bookish? You know, Clark Kent, Lois Lane, Lana Lang, Peter Parker, Stephen Strange, that kind of thing). Only they said they weren’t filing it “as a privileged motion,” so off it goes into some committee in the Neutral Zone, never to return. This makes no sense, because under House of Representatives Rules of Order an impeachment resolution takes precedence -- it’s privileged, in other words -- over any other motion except the one motion that’s always in order, a motion to adjourn.
This all makes so little sense that I haven’t even looked this up online, I’ve already lived through a couple of presidential impeachments and I remember. This is a quirk of the H. of Rep. Rules, because impeaching any criminal serving in Federal office is urgent, they decided once, so urgent that all other business may be safely put aside. 
But they claimed their motion wasn’t privileged -- I don’t know how they did this other than by simply asserting as much -- and it was sent off to die by the actual Speaker with a scoff and a sneer. This doesn’t mean impeachments don’t usually go to the Judiciary Committee; they do, but in theory they don’t have to. If something is privileged, then anyone can call the question on it and demand a vote; and then Congressman Shirtsleeves and his Caucus of Doom would be quickly revealed for what they are, maybe three dozen fanatics who will happily follow their leaders off the cliff into the icy waters of the fjord, there to drown pathetically.
If Congressman Shirtsleeves wants to pretend-run for Speaker, he can’t have his little dog-wagger’s caucus exposed as making up, at best, maybe 12 percent? Google doesn’t want to actually tell me, it wants to teach me fucking arithmetic. Not. Now. 
But howsoever large it may be, this dog must be sore from all the wagging it’s had to put up with from the Tail Caucus. There’s a small moderate edge that gets smaller every two years, and then there’s a mass of incoherent conservatives in the “middle” of the party who aren’t actually flat-out bizarre themselves, but whose voters mostly admire that kind of thing and therefore voted for Agent 45, because he does bizarre so well. 
And he does privileged very well, of course. A lot of people enjoy watching this sort of thing on TV. And the show is still pulling in the viewers, and isn’t that what he really understands? The more privilege you have, the less likely it is to even occur to you to check it. But sometimes, in politics, you have to check it and then say it isn’t there.
When the Preznit Tweets in ALL CAPS, Is the Policy Also All Caps? And WTF Does That Even Mean?
Back when this shitshow was just getting under way, in January of 2017, I heard an interesting segment on NPR. The problem was that anything and everything the president says in public is, ipso facto, a statement of the official policy of the United States. That’s why normal presidents -- defined here, very loosely, as the first 44 presidents -- were almost always careful not to say something stupid in public, however tempting it might feel at the time. 
People in DC were seriously worried: are tweets going to be considered policy statements? If he announces some sudden policy zigzag, like embracing Russia and spurning Montenegro in a furious four a.m. fusillade, does that become the official policy zigzag of the Untied States? No interagency process, no levels of input, no feedback, no attempt to avoid doing something stupid in public, since that’s what he does. It’s how he got so close to being elected president. All we get from those who choose to work for him are brief texts leaking things to reporters as they go hurrying along after him, trying to keep up with the Preznit and scoop up his messes at the same time.
Yes, of course, is the short answer. Those who oppose him oppose America, and are enemies of the people. Show trials to follow. He may not know what the fuck he even thinks he’s doing, and he’s already tanked the dominant sectors of the ag economies in half a dozen states (Iowa = soybeans, Maine = lobsters, Maine and Massachusetts = cranberries, etc.) with his idiotic garbled version of a 20th century trade war, but he has the phone in his hands, so he has the power, and his minions try to keep up and pretend all this is Normal. It’s the new abnormal. It’s the way we live now. __________
I have a submission package I’m working on but I can’t only write about one thing all the time, and I have a few days left in July. Usually I don’t write about “mainstream” politics for public consumption, I just inflict it on family and friends. But, not to belabor the point, this is not a normal midterm, where picking up maybe 20 seats but not quite taking the House is seen as a respectable result, and then they get ready for more respectable results in 2020 by attacking the left wing of the party and blaming the shortfall on them.
If we the people of the Untied States can’t flip the House -- picture 218+ Democrats picking up a (small) house and flipping it over -- then we will be stuck with Mad King Don reigning unchecked another two years. They’ll lock in every advantage they’ve gotten a grip on and reach for more, and no one will investigate anything. He might even feel confident enough to fire Jeff Secessions and Rod Rosenstein. Then he’ll finally get to hire people who are “loyal” to run DOJ, and they’ll conduct a thorough purge at Justice like the one they’re doing now at EPA. Then they may start testing things like emergency executive orders, to see what they can get away with once the snowflakes are gone. Hearings and trials are so tedious, especially when you already know who’s guilty. It worked in England for the Stuarts, for a while anyway.
And above all they will dig in for the 2020 Census around the country: first warp the survey, then fiddle the numbers, so they can twist the outcome in the state legislatures that will sit in 2021 with slash-&-burn gerrymandering, now that the Supremes have made clear they won’t do anything to stop that kind of thing. We have to take back state legislatures; they are as important as the US House, maybe more so, since the redistricting fights will be carried on by the winners of the 2018 and 2020 elections.
The election is in 103 days.
0 notes