I Wish I Could Stop Writing About AI

Recently, a bunch of folks on the fediverse shared what sounded like an interesting post, claiming that the environmental impacts of generative AI such as LLMs are overstated, especially when considering ChatGPT-style chatbots.

Andy Masley: Why using ChatGPT is not bad for the environment - a cheat sheet

After all, I've frequently argued that, at least in its modern incarnation, AI is a climate-destroying disaster. If I'm wrong in that, or even just possibly wrong, I'm obligated to at least do the due diligence to make sure I have my facts in order. Notably, that's not the same as "we should listen to both sides," fuck that noise. It's the basic admission that empiricism and rationality aren't foolproof, and it's possible to erroneously believe factual claims that wind up not being true.

Put differently, if someone claims that trans folks shouldn't exist, I don't owe it to anyone to hear out the details on that — I'm fully justified in telling them to fuck off right from go. On the other hand, if someone says "hey, trickle-down economics actually works, and here's some concrete data that proves it," I'll still almost certainly be justified in telling them to fuck off, but due diligence at least demands I understand where they're wrong.

In practice, I have limitations on my time, emotional energy, and just sheer willingness to engage with the same arguments over and over again. That is perfectly rational, I will posit, as a policy, even if it doesn't yield perfectly rational fact claims. So, in practice, I don't listen to the nerdier kinds of fascists claiming again and again to have proven that Reagan didn't do anything wrong. I'm not missing much.

With this post, though, arguments about generative AI and energy usage are about 35 years less mature, so I found myself clicking through to read the post. As one might expect from the title of my own post, and the fact that I've not done an about-face in public, admitting how wrong I was all this time, the post was more fact-shaped than factual. As a result, instead of writing what I'd like to write this afternoon, I find myself deep within my Someone Is Wrong on the Internet mode. Alas.

xkcd: Duty Calls

Let's Get This Out of the Way

Before I get into the nitty gritty of Masley's article, let me start by laying out a few basics about generative AI first. It's helpful in understanding what factual claims and objections are even at play here, and which are not under further contention at the moment. Indeed, as Masley himself says:

> This post is not about the broader climate impacts of AI beyond chatbots, or about whether AI is bad for other reasons (copyright, hallucinations, job loss, risks from advanced AI, etc.).

First and foremost, the term "AI" is itself rather confusing, and purposefully so! AI as a term goes back much, much further than the current hype cycle, and has expanded and contracted over the intervening years. Different subfields have adopted the term in different ways, too, confusing the issue further. For instance, game developers have historically used the term "AI" to refer to the set of programmed behaviors used to create the illusion that in-game characters are real in some limited sense. That is, from the perspective of video games, AI is almost a theatrical term.

It's immensely profitable for the purveyors of generative AI products such as ChatGPT, Microsoft's various Copilots, Google Gemini, etc. to build on the long history of AI as a term to lend the latest round of bullshit some credibility. It's kind of like how whenever you point out that the police should get less funding due to all the violence and corruption, people point to police directing traffic. Yeah, that's good, but it doesn't take someone with a gun and a nearly unlimited license to kill showing up in the middle of an intersection to direct traffic. Indeed, quoting Masley again:

> The services using 97-99% of AI’s energy budget are (roughly in order):
>
> - Recommender Systems - Content recommendation engines and personalization models used by streaming platforms, e-commerce sites, social media feeds, and online advertising networks.

Recommender systems have existed for much longer than generative AI in its present form. The infamous Netflix Prize, designed to improve the performance of recommendation systems at the time, dates all the way back to 2006. Conflating generative AI and restricted Boltzmann machines augmented with decision trees is misleading at best.

Wikipedia: Netflix Prize

Koren 2009: The BellKor Solution to the Netflix Grand Prize

To be clear, then, when I use the term "AI" in this post, I intend it in a narrower sense than Masley's use of the term, but closely in line with his focus on ChatGPT and similar chatbots that offer an interface to large language models. That itself is even a little vague, but such is the difficulty of trying to discuss the ever-shifting marketing claims made by giant tech companies.

With the term established, it's also critical to be clear about the dangers presented by AI (again, now in the narrow chatbot-like LLM sense, rather than the broader historical sense). AI products are trained on the uncompensated labor of millions, are promoted and used to further devalue and deskill jobs, and perhaps most chillingly, are intrinsically connected to the global rise of fascism. As I've argued before:

> I used to see the AI bubble and trans rights as distinct issues. I no longer do. The fascist movement in tech has truly metastasized, as evidenced by Elon Musk's personal coup, his endless supply of techbro supporters, tech companies' eagerness to axe DEI programs once Trump gave them an excuse, erasure of queer lives from tech products, etc.
>
> To the extent that AI marketing is an attempt to enclose and commodify culture, and thus to concentrate political power, I see it as a kind of fascism.

@xgranade@wandering.shop: "I used to see..."

My argument there echoes, and has been echoed by, others; I am far from alone in drawing a connection between eugenics, fascism, and AI.

> The AI projects currently mid-hype are being developed and sold by billionaires and VCs with companies explicitly pursuing surveillance, exploitation, and weaponry. They fired their ethics teams at the start of the cycle, and diverted our attention to a long-term sci-fi narrative about the coming age of machines – a “General Intelligence” that will soon “surpasses” human ability.

Miriam Eric Suzanne: Tech Continues to be Political

> I mean, every part of this is really upsetting to me, and I think that this notion of post humanity, which often goes along with AI, does embed within it some really troubling ideas about like the innateness of intelligence, whereas I think intelligence is in large part, a learned skill and it's a product of early childhood education and the kind of habits that you cultivate throughout your life of questioning things. But, you know, again, I'm just gonna restate, we don't know what intelligence is. We don't know how much of it is genetic or what kind of factors shape it. We can't even define it. And yeah, it's just… This notion that we can become super beings inevitably just goes to some really dark, really racist places.

Our Opinions are Correct, Episode 125: Silicon Valley vs. Science Fiction: ChatGPT

With that in mind, I want to revisit Masley's introduction before moving on to his factual claims:

> This post is not about the broader climate impacts of AI beyond chatbots, or about whether AI is bad for other reasons (copyright, hallucinations, job loss, risks from advanced AI, etc.).

That phrase, "risks from advanced AI," should raise some eyebrows given all of the above. Indeed, Masley links to an 80,000 Hours post about "AI catastrophe" as a citation for said "risks."

80,000 Hours: Preventing an AI-related catastrophe

For the unfamiliar, 80,000 Hours is a nonprofit dedicated to advancing the cause of Effective Altruism (EA), a philosophy sometimes derided as an "ethics of the rich."

PhilosophyTube: The Rich Have Their Own Ethics

80,000 Hours was indeed founded by none other than William MacAskill, the founder of the EA movement; it's no secret that I do not think very highly of him nor of his apologia for the likes of Sam Bankman-Fried, one of his biggest supporters. But that's somewhat beside the point here. Rather, the link between EA and a related movement known as longtermism is critical to understanding why including a link to 80,000 Hours is suspect. While EA teaches that the rich should be as rich as possible, as the rich are the best arbiters of how to allocate societal resources for human flourishing, longtermism teaches that the harms we may suffer now are nothing compared to the risks of not developing general AI in the far, far future. Reading the post cited by Masley, then, we see that much of the risk highlighted by 80,000 Hours consists of AI not being "aligned," a framing deeply rooted in EA and longtermism.

> When we say we’re concerned about existential catastrophes, we’re not just concerned about risks of extinction. This is because the source of our concern is rooted in longtermism: the idea that the lives of all future generations matter, and so it’s extremely important to protect their interests.
>
> This means that any event that could prevent all future generations from living lives full of whatever you think makes life valuable (whether that’s happiness, justice, beauty, or general flourishing) counts as an existential catastrophe.

As an aside, it's notable what risks 80,000 Hours does not include in the risks presented by AI. In their page on longtermism, they note that while climate change is very serious, it's not likely an extinction-level risk by their accounting:

> Climate change, for example, could potentially cause a devastating trajectory shift. Even if we believe it probably won’t lead to humanity’s extinction, extreme climate change could radically reshape civilisation for the worse, possibly curtailing our viable opportunities to thrive over the long term.

By comparison with supposedly existential risks such as AI becoming sentient in ways that don't benefit capital, responding to the climate emergency is more of a nice-to-have. I don't agree with those priorities, to say the least.

Notably, all of the above stands whether or not ChatGPT's environmental impacts are as benign as Masley claims, a point that he acknowledges in his very introduction, even if he cites the philosophical basis for modern tech-funded fascism in doing so (that the article is posted to Substack does not escape me). What, then, do we learn if Masley's claims hold? Not nothing, to be sure. As I mentioned at the get-go, acting in concert with the facts is pretty damned important. At the same time, our course of action should be clear regardless of the climate impacts of ChatGPT et al.

Joel Pett: What if it's a big hoax and we create a better world for nothing?

I argue that we shouldn't be tolerant of sloppy factual claims, let alone lies and disinformation, but we also need to keep perspective: it's worth opposing fascists even if they don't pollute that much, and it's worth protecting labor even if the externalities of doing so are fairly negligible. That is, I'll warrant, a somewhat subtle and nuanced position, but hey. This is my blog, so I get to have opinions that take more than a sentence or two to express!

How Much Energy Does a Prompt Cost?

Much of Masley's argument derives from a single central claim: that the cost of a single prompt to a typical ChatGPT-like chatbot is upper-bounded by 3 Watt-hours (10,800 Joules), or about the full capacity of an AA-sized Li-ion battery, and is likely to be far less than that overestimate.

Wikipedia: AA battery

I choose that comparison to be somewhat inflammatory, of course. Masley chooses other comparisons to illustrate his point, including "running a microwave for 10 seconds." The daily life of a typical North American includes energy references at massively different scales, offering a lot of opportunity to choose more or less innocuous-sounding comparisons. A typical plug-in electric car may have a battery capacity of about 40 kWh, or 144,000,000 Joules; that's roughly 13,000 ChatGPT prompts using Masley's claimed figure, so who cares, right?
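These unit conversions are easy to sanity-check in a few lines. Here's a quick sketch using the per-prompt upper bound above and my illustrative 40 kWh battery figure:

```python
# Sanity-check the unit conversions used in the comparison above.
J_PER_WH = 3_600  # one watt-hour is 3,600 joules by definition

prompt_wh = 3.0      # Masley's claimed per-prompt upper bound, in Wh
battery_kwh = 40.0   # my illustrative car battery capacity, in kWh

prompt_joules = prompt_wh * J_PER_WH
battery_joules = battery_kwh * 1_000 * J_PER_WH
prompts_per_charge = battery_kwh * 1_000 / prompt_wh

print(f"{prompt_joules:,.0f} J per prompt")      # → 10,800 J per prompt
print(f"{battery_joules:,.0f} J per charge")     # → 144,000,000 J per charge
print(f"{prompts_per_charge:,.0f} prompts")      # → 13,333 prompts
```

The point of doing the arithmetic explicitly is that the innocuousness of any such comparison depends entirely on which everyday quantity you divide by.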

There are three problems with this analysis, though: we don't actually know how much energy a ChatGPT prompt costs, that figure doesn't include the cost of collecting data for and training an AI model, and on its own, that analysis doesn't suggest any particular course of action.

For the first, Masley cites only Epoch AI, an AI industry research institute. Of particular note, Epoch is a group with a vested financial interest in reaching a particular set of conclusions, namely that AI is an industry that should see further investment. While that conflict alone doesn't invalidate their conclusions, taken together with the scarcity of corroborating data, it warrants some modicum of suspicion. Where do they get that 3 Watt-hour upper bound, then? Epoch proceeds in three steps: estimating the number of parameters that must be evaluated for a given chunk of output, estimating the number of expected chunks of output, and estimating the energy cost per parameter evaluation. This strategy necessarily involves making a significant number of assumptions, such as the duty cycle of GPUs in a data center, the ratio of average to peak dissipated power, and so forth. Epoch's assumptions may or may not be reasonable, but without more transparency into how ChatGPT and other chatbots are implemented, their assumptions are just that: assumptions.
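To make the shape of that three-step estimate concrete, here's a rough sketch of a parameters-times-tokens calculation in the style Epoch describes. Every number below is an assumption of my own choosing (parameter count, tokens per reply, GPU throughput, power draw, utilization); none of these figures are disclosed by AI vendors, which is exactly the problem:

```python
# A rough sketch of a parameters-times-tokens inference energy estimate.
# All inputs below are illustrative assumptions, not disclosed figures.
params = 100e9               # assumed active parameters per forward pass
tokens = 500                 # assumed tokens in a typical response
flops_per_token = 2 * params # rule of thumb: ~2 FLOPs per parameter per token

gpu_flops = 1e15             # assumed peak GPU throughput, FLOP/s
utilization = 0.1            # assumed fraction of peak actually achieved
gpu_power_w = 700            # assumed GPU board power, watts

seconds = tokens * flops_per_token / (gpu_flops * utilization)
watt_hours = seconds * gpu_power_w / 3_600
print(f"{watt_hours:.2f} Wh per prompt")  # → 0.19 Wh per prompt
```

With these particular assumptions the answer lands near Epoch's optimistic figure, but halving the utilization or doubling the response length moves it severalfold; the estimate is only as good as the undisclosed inputs.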

Epoch, and by extension Masley, claim that 0.3 Watt-hours (1,080 Joules) per prompt would be a more accurate estimate, contrasted with those estimates obtained by de Vries in 2023.

de Vries: The growing energy footprint of artificial intelligence

Both the Epoch and de Vries estimates largely exclude energy costs other than the GPUs used in evaluating LLM outputs; a more comprehensive estimate would also include other costs such as CPU, storage, networking, and cooling. Regardless, arriving at a concrete estimate without access to data currently withheld by AI companies is difficult at best. Indeed, as de Vries has noted:

> This energy calculation felt like “grasping at straws”, de Vries says, because he had to rely on third-party estimates that he could not replicate. And his numbers quickly became obsolete. The number of servers required for an AI-integrated Google search is likely to be lower now, because today’s AI models can match the accuracy of 2023 models at a fraction of the computational cost, as US energy-analyst firm SemiAnalysis (whose estimates de Vries had relied on) wrote in an e-mail to Nature.

Nature: How much energy will AI really consume? The good, the bad and the unknown

Masley by and large shrugs off this problem, and assumes that the 3 Watt-hour per prompt figure is good enough:

> So I disagree that this is vibes and guesswork. It’s very uncertain! But people more knowledgeable than me have tried their best to put a number on the energy and water involved, so it’s more than a random shot in the dark. I’ve tried to defer to where all their guesses are. Almost all conversations about individual climate impacts from using ChatGPT seem to assume the same numbers are correct, so this is what’s being debated. We could all be wrong, but it seems just as likely that ChatGPT actually uses less energy than that it uses more. Given that we’re uncertain, and a lot of people are still making strong claims that ChatGPT is terrible for the environment, I think I’m perfectly within reason to write about how the numbers we have strongly imply that they’re wrong.

And now we're very firmly into where Masley's article becomes more fact-shaped than factual, more opinion than objective truth. That's fine, insofar as it goes, but he makes an implicit appeal to the authority that would come along with an objective analysis. In particular, if we're worried about whether or not something is, as Masley puts it, "terrible for the environment," then the philosophies of effective altruism and longtermism he cites via 80,000 Hours would seem to imply that we should be cautious in the absence of objective data. At the very least, a more reasoned analysis would likely include a call for AI vendors to be much, much more transparent about the potential energy costs of their products.

I don't know how much energy a given prompt takes, and nor does Masley. That should be a point of concern, not something to blithely dismiss. Given the massive draw of modern data centers, and how much that draw has corresponded with increases in AI adoption, we at least have a rational basis to be suspicious of the rather rosy AA-battery-per-prompt figures, to say nothing of Masley's much more optimistic 0.3 Watt-hours per prompt assumption.

Recall from above that Masley claims to address that suspicion, however:

> The mistake they’re making is simple: ChatGPT and other AI chatbots are extremely, extremely small parts of AI’s energy demand. Even if everyone stopped using all AI chatbots, AI’s energy demand wouldn’t change in a noticeable way at all. The data implies that at most all chatbots are only using 1-3% of the energy used on AI.

I'll straight up say it: I have no idea where he's getting the 1 to 3% figure. My best guess is that this claim comes from the back-of-the-envelope calculation in another of his posts. If so, that presents a problem for his analysis, in that it assumes the very answer he's looking for!

https://web.archive.org/web/20250508133528/https://andymasley.substack.com/p/individual-ai-use-is-not-bad-for?open=false#%C2%A7chatgpt-and-similar-apps-are-not-the-reason-ais-total-energy-use-is-rising-so-much

More concerning, the 1% to 3% claim doesn't fit with energy usage doubling over a five-year timespan.

IEA: AI is set to drive surging electricity demand...

Here again, the secrecy of the AI industry makes it difficult to reach firm and objective conclusions. What we do know is that the companies who make and sell AI products have also been demanding startling amounts of energy, and claim that they need to do so to cover AI usage.

What Should We Do About It?

On the basis of his assumptions about the energy required for ChatGPT prompts and the volume of data center usage corresponding to chatbot prompting, Masley offers a very strong conclusion right from the start:

> By being vegan, I have as much climate impact as not prompting ChatGPT 400,000 times each year (the water impact is even bigger). I don’t think I’m going to come close to prompting ChatGPT 400,000 times in my life, so each year I effectively stop more than a person’s entire lifetime of ChatGPT searches with a single lifestyle change. If I choose not to take a flight to Europe, I save 3,500,000 ChatGPT searches. this is like stopping more than 7 people from searching ChatGPT for their entire lives. Preventing ChatGPT searches is a hopelessly useless lever for the climate movement to try to pull. We have so many tools at our disposal to make the climate better. Why make everyone feel guilt over something that won’t have any impact?

This is, to be blunt, patent bullshit. The question that Masley sets out to answer is emphatically not whether or not to eat meat, but whether or not using ChatGPT et al. is likely to present a risk to the climate emergency. The existence and scope of other climate risks is absolutely immaterial to that question. Masley's insistence on making these kinds of distracting comparisons grows more absurd as his article grows in length:

> If everyone in the world stopped using ChatGPT, this would save around 3GWh per day. If everyone in the world who owns a microwave committed to using their microwaves for 10 fewer seconds every day, this would also save around 3GWh per day.

Is Masley taking action to encourage people to turn their microwaves on for 10 seconds less each day? He's absolutely arguing that people should use ChatGPT, even if only in the indirect and implicit form of downplaying the climate risks that doing so presents. If that doesn't also come with encouraging people to use the microwave less, then by his own argument, he's contributed to the waste of 3 gigawatt hours per day (125 megawatts, or about the capacity of a coal power plant).
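The parenthetical conversion there is worth spelling out: an energy savings expressed per day maps directly onto a continuous average power draw. A minimal sketch, using Masley's own 3 GWh/day figure:

```python
# Convert an energy-per-day figure into an average continuous power draw.
gwh_per_day = 3.0    # Masley's figure for worldwide ChatGPT usage
hours_per_day = 24

avg_gw = gwh_per_day / hours_per_day  # GWh/day divided by h/day gives GW
avg_mw = avg_gw * 1_000
print(f"{avg_mw:.0f} MW average draw")  # → 125 MW average draw
```

That continuous 125 MW is what has to be generated somewhere, around the clock, regardless of which appliance you choose to compare it to.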

Put this way, the absurdity becomes clear: these are incomparable activities, and whether or not Masley argues for more efficient microwave usage has no bearing on the efficiency and costs of ChatGPT. Here again, Masley seems to preemptively acknowledge the criticism, that committing to category errors in offering comparisons does not lead to any particular insight into how or if we should use ChatGPT:

> “Whataboutism” is a bad rhetorical trick where instead of responding directly to an accusation or criticism, you just launch a different accusation or criticism at someone else to deflect. Kids do this a lot.

Rather than actually understanding why "whataboutism", as he puts it, is bad, Masley simply redefines away his attempts at sleight of hand:

> Under this revised definition, it’s whataboutism to say “eating meat isn’t bad because people drive,” but it’s not whataboutism to say “Google isn’t bad because its emissions are so drastically low compared to everything else we do,” and it’s not whataboutism to say the same about ChatGPT.

That is, if we take his premise, comparisons between different modes of energy consumption are always relevant, even if making such comparisons offers no actionable insight.

Looking at his claims again in this light, let's revisit his list of comparisons for other activities that use approximately 3 Watt-hours worth of energy:

> - Leave a single incandescent light bulb on for 3 minutes.
> - Leave a wireless router on for 30 minutes.
> - Play a gaming console for 1 minute.
> - Run a vacuum cleaner for 10 seconds.
> - Run a microwave for 10 seconds
> - Run a toaster for 8 seconds
> - Brew coffee for 10 seconds

There's a massive difference between each of these points of comparison and a ChatGPT prompt. I like having light in my house, it's pretty useful. Having access to the internet is pretty wonderful, too. Playing video games is actually quite fun, and having a clean house is quite nice. It's great to have coffee and hot food on demand.

By contrast, ChatGPT does... what? Masley comes to our rescue with an answer!

> ChatGPT could write this post using less energy than it takes you to read it.

Ah. It could generate more disingenuous nonsense for the rest of us to sort through. Wonderful.

The Dismal View

Look at Masley's comparison again, and it becomes clear that it's even more dismal than it initially seems.

So fucking what if ChatGPT could write a blog post using less energy than a human? You don't give humans energy in the form of light, food, water, and heat so that they produce blog posts hyping your favorite tech nonsense, you give humans the energy to live because it's good to provide for fellow humans! Masley's throwaway joke there betrays a much bleaker and more dismal view still: that it is good and appropriate to view humans themselves as climate costs. Not our habits, our technologies, our cultures, but our mere existences. That kind of thinking is, without hyperbole, exactly in line with the kinds of Malthusian views of population dynamics that have motivated eugenics and fascist movements for a century.

> We will suppose the means of subsistence in any country just equal to the easy support of its inhabitants. The constant effort towards population...increases the number of people before the means of subsistence are increased. The food therefore which before supported seven millions, must now be divided among seven millions and a half or eight millions.

Wikipedia: Malthusianism

Masley's invocation of eugenicist thought here is far from unique, however.

> Our findings reveal that AI systems emit between 130 and 1500 times less CO2e per page of text generated compared to human writers, while AI illustration systems emit between 310 and 2900 times less CO2e per image than their human counterparts.

Tomlinson et al. 2024: The carbon emissions of writing and illustrating are lower for AI than for humans

What a dismal, dreary view of art, culture, and human joy! I do not write to produce capitalist gain, I write because I am moved to, because it is one of the most human things in the world to tell stories — it's right up there with feeding, fighting, and fucking as far as imperatives go. As awful as that mindset is, though, it is deeply interwoven into the strange alliance between eugenicist movements, AI evangelism, longtermism, and effective altruism, as Our Opinions Are Correct called out elsewhere in episode 125. It's worth listening to the episode in its entirety, but if you'll forgive a longer excerpt, Annalee Newitz and Charlie Jane Anders deal with that view as espoused by Nick Bostrom, author of Superintelligence:

> Annalee: So, keeping that in mind. Charlie Jane, I'm gonna have you read this quote from Superintelligence about how we will deal with super intelligent AI workers.
>
> Charlie Jane: Okay. “A salient initial question is whether these working machine minds are owned as capital (slaves) or are hired as free wage laborers. On closer inspection, however, it becomes doubtful that anything really hinges on this issue. There are two reasons for this. First, if a free worker in a Malthusian state gets paid a subsistence level wage, he will have no disposable income left over after he has paid for food and other necessities. If the worker is instead a slave, his owner will pay for his maintenance, and again, he will have no disposable income. In either case, the worker gets the necessities and nothing more.
>
> “Second, suppose that the free laborer were somehow in a position to command an above subsistence level income, perhaps because of favorable regulation. How will he spend the surplus? Investors would find it most profitable to create workers who would be “voluntary slaves” who would willingly work for subsistence level wages.
>
> “Investors may create such workers by copying those workers who are compliant. With appropriate selection and perhaps some modification to the code, investors might be able to create workers who not only prefer to volunteer their labor, but would also choose to donate back to their owners any surplus income they might happen to receive.
>
> “Giving money to the worker would then be, but a roundabout way of giving money to the owner or employer, even if the worker were a free agent with full legal rights.”
>
> Oh gosh, that is so dystopian. First of all, the notion that the only difference between a slave and a free worker is how much resources you receive. Like if you're a free worker, you might get something above your subsistence needs. There's nothing about the actual nature of slavery, which is that you can't change jobs and you can't have a life or determine your own destiny.
>
> That's incredibly dark and weird in the notion that like, well, workers might be able to get paid more than subsistence level, but then we just turn around and make workers who are happy to work for free.
>
> And I'm like, that is some real rhetorical slippage. Like he's just like, oh, but then blah. Then we're just gonna magically create slaves anyway. And it's just, I don't even understand this paragraph. I've read it like three or four times and it just baffles me more and more each time.

Wrapping Up

Masley's post was shared fairly widely, and by some quite influential folks in tech, such that I think it was worth a few thousand words to critically examine his central claim: that we should not refuse to use ChatGPT and similar AI products on an environmental basis alone. Trying to understand that claim necessitates a bit of a detour into what AI is, how it intersects with eugenicist and fascist movements in tech, and about the climate emergency in general. Masley's central claim, however, rests entirely on guesswork (however reasonable!) and, to borrow his own term, "whataboutism."

His argument, then, is much better served by advocating for AI vendors to provide the transparency needed to actually evaluate the climate impact of LLM usage. I would personally be far more sympathetic to his post had he taken that tack, rather than downplaying the very real concerns raised with LLM products.

As it stands, though, I submit that Masley's arguments are best understood as fact-shaped apologia rather than a serious contribution to discourse. Perhaps even more unfortunately, this dismissal comes along with some truly depressing statements about the human condition; statements that I find downright appalling as someone who spends much of their day learning how to write and communicate with fellow humans.

In light of all the above, I'm happy to shut the proverbial book on this post, and be content continuing to oppose the proliferation of AI products.

Support me!

This is a longer post than my normal, by a fair bit, and took me several hours to write and research. If that is valuable to you, please consider supporting that labor. Thank you so much!

https://ko-fi.com/xgranade


Do Not Obey In Advance


Speaking of Timothy Snyder, Literary Hub published the first chapter (the one on not obeying in advance) of his 2017 book On Tyranny. It begins:

Do not obey in advance.

Most of the power of authoritarianism is freely given. In times like these, individuals think ahead about what a more repressive government will want, and then offer themselves without being asked. A citizen who adapts in this way is teaching power what it can do.

Anticipatory obedience is a political tragedy. Perhaps rulers did not initially know that citizens were willing to compromise this value or that principle. Perhaps a new regime did not at first have the direct means of influencing citizens one way or another. After the German elections of 1932, which permitted Adolf Hitler to form a government, or the Czechoslovak elections of 1946, where communists were victorious, the next crucial step was anticipatory obedience. Because enough people in both cases voluntarily extended their services to the new leaders, Nazis and communists alike realized that they could move quickly toward a full regime change. The first heedless acts of conformity could not then be reversed.

It’s also worth reading the original list posted by Snyder in November 2016 that became the basis of On Tyranny: Fighting Authoritarianism: 20 Lessons from the 20th Century.

10. Practice corporeal politics. Power wants your body softening in your chair and your emotions dissipating on the screen. Get outside. Put your body in unfamiliar places with unfamiliar people. Make new friends and march with them.

11. Make eye contact and small talk. This is not just polite. It is a way to stay in touch with your surroundings, break down unnecessary social barriers, and come to understand whom you should and should not trust. If we enter a culture of denunciation, you will want to know the psychological landscape of your daily life.

12. Take responsibility for the face of the world. Notice the swastikas and the other signs of hate. Do not look away and do not get used to them. Remove them yourself and set an example for others to do so.

13. Hinder the one-party state. The parties that took over states were once something else. They exploited a historical moment to make political life impossible for their rivals. Vote in local and state elections while you can.

Tags: books · On Tyranny · politics · Timothy Snyder · USA


How Trans People Can Survive, and Thrive: A Playbook to Meet Trump’s ‘Shock and Awe’


Veronica Esposito offers her perspective as a therapist on how to stay safe and healthy during this time of crisis.

  

by Veronica Esposito

Transgender people have been hit by a wave of attacks in the three weeks since President Trump’s inauguration, mostly via a series of unlawful and discriminatory executive orders. This is part of a larger strategy — “shock and awe” — that’s intended to create chaos and sow despair in large parts of the nation that are not on board with Trump’s radical agenda to re-shape the federal government and obliterate accepted facts about law, governance and the fabric of our nation.

I’m a licensed marriage and family therapist, and I’m also transgender. I specialize in serving my community, and right now I am seeing our collective fears firsthand. I am doing my best to help. I’m also doing what I can to keep myself in good shape so that I can continue to perform my job and survive the Trump years. It’s not easy, but I’ve learned a few things that I want to share.

  • Find ways to experience community and joy. The whole point behind Trump’s shock and awe strategy is that he and his advisors know they don’t have the authority to do what they proclaim, so they are making a big show in hopes that we’ll collectively relinquish power. It’s like when a pufferfish self-inflates to scare its enemies. We resist by finding ways to experience community and joy, and to keep building a better future in spite of his hatred.

  • Make plans. Do what you can right now to be safe. Do you think you need to leave the country? OK then, start researching where you can easily get a travel visa, start setting aside money and a go-bag, and plan your departure. Feeling afraid? That makes sense; start looking for support groups, reaching out to friends, and consider mindfulness meditation. Hate Trump's discriminatory actions? Call your representatives, find protests and other actions in which you can take part. Taking some steps, no matter how small, is a powerful way to counter anxiety.

  • Give yourself reality breaks. Yes, it’s very important to know what’s happening and to be engaged in healthy ways. It’s also important not to stay glued to Bluesky 24 hours a day—there’s a difference between catching up on the latest and endlessly scrolling for that one “last” hit of news. Breaks can be a lot of different things: you can get out into nature and forget everything for a few hours, go to a favorite cafe where everything feels normal, listen to a fun podcast while you cook a meal for someone you care about, or just disappear into a movie with a familiar, safe world. Just getting out of your house and taking a walk around the block can do wonders.

  • Don’t future-trip. I can’t promise anyone that there won’t be a federal ban on our medical care. I can’t promise the GOP won’t pass a law legally defining us out of existence. These things and worse may happen, but it doesn’t help to catastrophize right now. In times of crisis it’s important to remain grounded and to not think too many steps ahead about what terrible things may occur. Try to focus on what is within your control. Focus on what you can do now to help yourself and your community; trust that should the worst come to pass, you’ll be prepared.

  • Write out your fears. Our fears have a funny way of not looking quite so bad once they’re down on paper. Try making a list of the things that scare you most. Then start thinking about what your red lines are — things that tell you, “OK, this scary event is getting a little too close for me.” Note those and consider what you’ll do to stay safe. Try putting out your fears in a journal or discussing them with others.

  • Have endless self-compassion. At times of crisis it’s easy to feel like you need to be doing more, or should be putting everyone before yourself. None of that is true. Whatever you can do right now is valid and valuable, and this is definitely a time to make sure you’re paying close attention to your own mental health. Try not to be judgmental toward yourself. If you have a strong self-critical voice, it may be well-meaning but is unhelpful in protecting yourself and others. Self-kindness is a much better way.

  • Have something to look forward to. We all need something good to be working toward. For me it’s training to summit a mountain in the summer. Whatever it is for you—be it planning a weekend get-away, taking a community college class, leveling up in your career, making a beautiful quilt, reading all the books by a cherished author—having a goal to work toward and celebrate will provide a sense of agency and a healthy way to connect with a better future. 

  • It’s OK to not feel OK. The nation that we thought we lived in is being torn up before our eyes, and the most powerful politician on earth is attacking us. We are absolutely going to be feeling things: grief for the world we’re losing, fear for ourselves and loved ones, anger at the institutions that have failed us, anxiety at what the coming months hold. Making space for these emotions can mean a lot of things—self-compassion, talking with a friend, calling a crisis line, using guided meditations designed to help with emotional release, journaling, using art and other creative methods, or reading a book about the trans rights struggle. The important thing is to know that these feelings will come, and they are completely appropriate.

  • This will eventually end. It’s important to remember that shock and awe is not a strategy that can be maintained. These days of 10 new crises every hour will eventually stop. And a lot of Trump’s discriminatory actions will be met with resistance in the courts, by mass demonstrations and by protections in Democratic states. (These things are, in fact, already happening.) While some of Trump’s hateful attacks may stick, it’s important to be aware that we are fighting back, and he will not win on all fronts. It’s also important to remember that, ultimately, we have each other. The trans community has survived decades of state-sanctioned repression. Many of us managed to transition and thrive before any politician gave us the right to gender-affirming care — if necessary, we can do that again.

If you’re feeling shaken up by recent events, you’re in good company. In fact, I’d go one further—if you’re not shaken up, I’d be concerned about you. The fact is, trans people everywhere are registering fears about things like passports, access to lifesaving medical care, hate crimes and bathroom safety. Even fears that seemed far-out suddenly don’t seem quite so outlandish.

But a lot of making it through the Trump administration is to recognize this as a battle in a longer struggle our community has waged over our identities and medical treatment. In this struggle there will be gains and there will be setbacks. After a lot of gains during the period roughly covering the mid-1990s through 2020, we are experiencing setbacks. It will be ugly for a while, but I do fully believe we can and will get through this, and that the longer march toward our liberation will continue. 

We will collectively weather the storm and do what we always have — living our truth, showing up for one another, and building a world that we can live in.


Veronica Esposito (she/her) is a writer and therapist based in the Bay Area. She writes regularly for The Guardian, Xtra Magazine, and KQED, the NPR member station for Northern California, on the arts, mental health, and LGBTQ+ issues.

 


10 best smut comics, ranked


Smut has been around for centuries, offering a way for humans to safely explore identity, sexuality, and sexual preference through fiction. These works weave together pleasure, desire, and artistry with suggestive, sometimes downright graphic images of carnal acts that hold a unique ability to tantalize the reader.

I tried to find smut comics that will hit on a range of kinks. So find a private location, light a candle, and see what flavor of erotica suits you the most by checking out our list of the 10 best smut comics, ranked.


Bartosz Ciechanowski’s Interactive Moon Article


The interactive animations are fantastic in this article on the Moon from Bartosz Ciechanowski.

It’s the perfect example of how these animations can truly be used as teaching tools – it’s like learning through play.

In the vastness of empty space surrounding Earth, the Moon is our closest celestial neighbor. Its face, periodically filled with light and devoured by darkness, has an ever-changing, but dependable presence in our skies.

In this article, we’ll learn about the Moon and its path around our planet, but to experience that journey first-hand, we have to enter the cosmos itself.

Read the full article here and check out Bartosz Ciechanowski on GitHub here.


Why the Work Still Matters


In the days following Donald Trump’s presidential victory, we have seen a larger-than-normal number of people canceling their subscriptions to 404 Media. Alongside these cancellations, many people have explained that they are canceling not because they do not like our articles but because they feel a general sense of depression, that nothing matters, or that they can no longer bear reading the news. 

There were many media outlets that wrote notes to their readers in the immediate aftermath of Trump’s shocking victory in 2016. Some of these notes said that their publications would position themselves as a resistance force against Trump, or made grand, sweeping pronouncements about what their work would be able to do. We cannot and will not make false promises to you about the power of journalism—especially at a small publication—to stop the country’s knowing march into authoritarianism. But we can explain our approach to the work and we can demonstrate to you why it matters.

In 2016, the four of us were at Motherboard, doing work that is very similar to the work that we’re still doing today. We have covered technology through one Trump term and intend to continue covering technology through the second Trump term. What we found to be true during Trump’s first term remained true during Joe Biden’s presidency, and will remain true as long as we do this: we cannot set the bar for our success at the systemic saving of democracy. Instead, we have found that our work can and does make incremental positive change at the local, state, and federal level, and that, over time, these small improvements become increasingly important.

The way that we have always done this and the way we will continue to do this is by fearlessly reporting on the ways technology and the powerful people and companies who own these technologies wield it against normal people, but especially against society’s most vulnerable people. Over the years, we have been called anti-technology or too cynical. But we are not anti-technology. We want technology that benefits people, and in order to do that we also have to expose how it can invade people’s privacy, surveil them, steal their work, steal their bodily autonomy and harass them, destroy any sense of a shared reality, value robotic plagiarism over human creativity, and undermine workers. We will hold companies, people, and politicians who accelerate towards this future to account. But there is another side to this coin. We have, and will continue to, champion and amplify people, groups, movements, and ideas that use technology to make our lives better, are fighting back against anti-human uses of technology, and serve to challenge, decentralize, or redistribute power from concentrated big tech companies to the masses.

We’ve called this perspective, which we hope shines through in most of our work, two things over the years: “Tech populism,” and “local reporting from the internet.” These are very similar but slightly different things. It is not—or should not be—a radical idea to report stories with the core assumption that technology should make life better for the people who use it and for society as a whole. And it should not be radical to believe that the immense amount of wealth and so-called progress being created from technological progress should be spread evenly and thoughtfully among its users, not tech CEOs and an oligarch class. That “progress” so far has instead brought us more intensive surveillance capitalism, the widespread theft of artists’ and writers’ work, the ransacking of natural resources, and the subjugation of workers in the United States and around the world. Our work is populist in that we recognize that many of the problems plaguing the United States today and which are factors that have laid the groundwork for Trump’s return—income inequality, a lack of affordable housing, unstable work, the widespread inability to tell what is real and what is fake—are being at least partially driven by technology and/or the immense wealth of the people who own tech companies.

“Local reporting from the internet,” meanwhile, means telling stories from the perspective of users and often lower-level tech employees, not by begging company communications professionals for access to executives or exclusive new features. Most of our articles tell the stories of hyperspecific communities of people who are using technology or are impacted by it in some way. By focusing on how technology impacts people, we have found that we can impact technology and make the world slightly better, regardless of who the president is.

This reporting strategy worked in Trump’s first term and it will be even more salient in a second term in which he has sought and created an even closer relationship with big tech CEOs. Trump has formed an alliance with Elon Musk and many of Silicon Valley’s worst people, the richest and most powerful of whom actively helped him get elected or immediately kissed the ring after he won the election–including Jeff Bezos, who demanded the newspaper he owns kill its planned endorsement of Kamala Harris and then immediately congratulated Trump on his “extraordinary political comeback and decisive victory.” Trump and, especially, Musk have campaigned on the false idea that mass deregulation and corporatism will somehow help normal people rather than further immiserate them.

All of this may sound vague or like empty platitudes. So, let’s make this concrete. 

In Trump’s first term we saw the widespread purging of government science and climate data. We also saw nonprofits, decentralized communities, and random people on the internet form collaborative efforts to successfully archive and share this data. We saw government workers risk their jobs and their freedom to leak critical information about purges happening within their agencies, and expect to see the same in Trump’s second term. We filed hundreds of Freedom of Information Act requests with federal agencies—which may be hamstrung in Trump’s second term—but we also filed hundreds of public records requests with state and local agencies that uncovered the creation and spread of surveillance systems, revealed that Apple was ordering recyclers to shred perfectly good iPhones and MacBooks into zillions of pieces, and showed Utah was contracting with a company turning the state into a surveillance panopticon (the CEO of that company was later fired, lost contracts, and had to rebrand). Under Trump, we reported on the widespread sale of cell phone location data to data brokers and bounty hunters, which led companies to stop the practice and ultimately led to multi-hundred million dollar fines from the FCC. Under Trump, we saw tech companies monopolize repair, but we also saw the beginnings of the right to repair movement, the end of “Warranty Void if Removed” stickers, and the first pieces of legislation that would ultimately become fair repair laws passed over the last few years. Under Trump, we saw the end of net neutrality, but our reporting helped kill big telecom lobbying campaigns and accelerate the rise of independent locally owned government ISPs that are faster, more reliable, and cheaper than the likes of Comcast and Cox. Under Trump, we reported on the use of Predator drones to surveil Black Lives Matter protesters and, because of our reporting, we saw Senators fight back against this practice.
Under Trump, we saw and reported on the rise of the first workers unions in the tech industry, broad protest against the gig economy and algorithmic bosses, and worker rebellions at Amazon, Google, Facebook, video game companies, and other major tech companies.

In the early days of Trump’s first term, we reported on the ways the average person (and even hackers) found their own ways to protest, how scientists reacted and fought back against the threat of a science-denier administration, and how Trump’s team approached transparency online–including the efforts of archivists to preserve digital history.

We started reporting on the impending fallout of FOSTA/SESTA long before Trump signed it into law in 2018: Sex workers told us that instead of saving any sex trafficking victims as part of its stated purpose, it would put more people at risk of exploitation. We listened, we reported on those worries and fears, and when they came true, we kept reporting on it. When platforms and site sections shuttered out of fear of legal retaliation from the Trump administration’s war on porn, we talked to site operators, users, sex workers, and hosting providers to try to understand how they were affected. Through the years we have covered the ways bodily autonomy, educational institutions, and marginalized people are threatened by leaders who align with and promote extremist ideology. We have covered the ways that the data broker industry specifically allows for the targeting of women seeking abortions and sells data to the military about Muslims—and has led to both corporate and government action that have made doing this type of surveillance more difficult.

At 404 Media, we’re continuing this work, as we’ve promised to do from the beginning. We’ve covered how the incoming vice president spurred hate in a small town. How AI boosters helped cheerlead Trump back into office. And how advertising, funded by tech billionaires, micro-targeted and lied to voters on the biggest social media platforms in the world, using divisive rhetoric. Not every story we do leads directly to positive impact, but many of them do. Our work has led to new moderation policies that make it more difficult to make nonconsensual AI porn (and child abuse imagery); a lawsuit against Nvidia for building AI models on the back of other people’s labor and creative work; fixes in the New York subway system to preserve privacy; Google kicking a company that claimed to be targeting adverts based on what people said near their smart devices from its platform (and Google booting a global surveillance tool from its ecosystem too); YouTube removing 1,000 videos that were involved in an elaborate Medicare scam; Amazon taking down dangerous AI-generated misinformation; and the shutdown of a tool used for harassment that was scraping Discord en masse. We’ve reported on what surveillance technology U.S. government agencies have purchased, and will continue this work as the Trump administration carries out its explicit plans for mass deportations.

We’re faced again with an administration made up of people who explicitly want to ban porn, restrict women’s healthcare, demolish reproductive rights and make it harder still to access sexual education. We followed these stories in Trump’s first term, and in Biden’s, too—and we have no intention of slowing down or stopping now that Trump’s headed back to the White House. When Trump won in 2016, we weren’t sure if our work mattered. Now we are sure that it does.


