updating my priors
2612 stories
·
3 followers

Funding Open Source?

1 Share
[$BLEEBZORX chart]
Most of the world's software infrastructure is, or is based upon, open source. The developers and supporters of some of it, for example the Linux kernel and the major compilers, are paid by technology companies because it is critical to their business. Other, less visible but similarly critical parts are supported by lone volunteers. Apart from the unfairness, this can lead to serious vulnerabilities. Back in 2018 I wrote about one such vulnerability, the event-stream hack, in Securing The Software Supply Chain:
The attackers targeted a widely-used, fairly old package that was still being maintained by the original author, a volunteer. They offered to take over what had become a burdensome task, and the offer was accepted. Now, despite the fact that the attacker was just an e-mail address, they were the official maintainer of the package and could authorize changes.
The change they authorized included code to steal cryptocurrencies.

In 2020 I wrote a detailed post about this problem entitled Supporting Open Source Software. Recently the topic resurfaced on an e-mail alias I read. But what triggered the post below the fold was that this coincided with yet another fascinating piece from Matt Levine, and his laugh-out-loud follow-up the next day.

In Supporting Open Source Software I discussed Cameron Neylon's 2017 paper on a related problem, Sustaining Scholarly Infrastructures through Collective Action: The Lessons that Olson can Teach us:
Neylon starts by identifying the three possible models for the sustainability of scholarly infrastructures:
Infrastructures for data, such as repositories, curation systems, aggregators, indexes and standards are public goods. This means that finding sustainable economic models to support them is a challenge. This is due to free-loading, where someone who does not contribute to the support of the infrastructure nonetheless gains the benefit of it. The work of Mancur Olson (1965) suggests there are only three ways to address this for large groups: compulsion (often as some form of taxation) to support the infrastructure; the provision of non-collective (club) goods to those who contribute; or mechanisms that change the effective number of participants in the negotiation.
In other words, the choices for sustainability are "taxation, byproduct, oligopoly".
"Taxation" in this context means some mechanism for compelling some or all users to pay. I summarized these choices thus:
  • Taxation conflicts with the "free as in beer, free as in speech" ethos of open source.
  • Byproduct is, in effect, the "Red Hat" model of free software with paid support. Red Hat was the second-place contributor to the Linux kernel and worth $34B when acquired by IBM last year. Others using this model may not have been quite as successful, but many have managed to survive (the LOCKSS program runs this way) and some to flourish (e.g. Canonical).
  • Oligopoly is what happens in practice. Take, for example, the Linux Foundation, which is:
    supported by members such as AT&T, Cisco, Fujitsu, Google, Hitachi, Huawei, IBM, Intel, Microsoft, NEC, Oracle, Orange S.A., Qualcomm, Samsung, Tencent, and VMware, as well as developers from around the world
    It is pretty clear that the corporate members, and especially the big contributors like Intel, have more influence than the "developers from around the world".
On the e-mail list my friend Chuck McManis argued for taxation, writing:
Rather than "free" software, you need "community" software. The users of that software are taxed proportionately to their use, and the taxes are used to fund maintenance. Using a progressive tax like income tax, you can adjust the cost burden from 'free' for people who are not creating value with it (they are just using it) to 'high' for people whose entire enterprise wouldn't exist if they didn't have access to it. That means an enforceable requirement to pay, and an IP protection structure that prevents theft by simple translation.

It is unfortunate that many technologically oriented people are not thinking more deeply about macroeconomics as a source of solutions to this problem, where "open source" plays the role that "roads" or "sewers" or "electricity wires" or "ship harbors" once did: things used by everyone but needing to be paid for and maintained.
Source
My response was that Chuck's examples taught both positive and negative lessons:
The thing that we know from long experience with the mechanisms for funding physical infrastructure like these examples is that over time they work less and less well. One only has to drive on California roads to know this is true. It is why most of the bridges in the US and elsewhere are life-expired (see Fern Hollow Bridge).

Thanks to inflation and feature creep the cost of maintenance increases faster than politics can increase the funding for it.
But the very next day Matt Levine's "Money Stuff" Bloomberg column Memecoin Venture Capital described a funding mechanism that Mancur Olson hadn't considered:
You launch a project, and it has a name, The Bleebzorx Network or whatever, and it has some business plan, and if the plan works it will make money. And then you go out to investors and you say “buy some Bleebzorx Tokens if you believe in my project,” and the investors are like “oh you are a smart founder and your project sounds good, we are interested, what do we get if we buy Bleebzorx Tokens,” and you say “well you get Bleebzorx Tokens.” And the investors, if they come from traditional financial backgrounds, say “no we know we get that, but like, what economic rights do these tokens have? What is their connection to the underlying project?” And you say “oh, nothing, they are just tokens. They just have the same name as the project.” And they say “well will you share your profits from the project with the token holders,” and you say “lol absolutely not.” And they say “well then why would we pay for these tokens? Even if the project succeeds beyond our wildest expectations, why would that make the tokens worth money?” And you say: “It just will. People will be like ‘oh, the Bleebzorx Project is good, we’d better buy some Bleebzorx Tokens.’ So they’ll buy tokens and the price will go up. They won’t overthink it, so neither should you. Just buy the tokens and you’ll make money.” Loosely speaking, these tokens are called “meme tokens,” or “memecoins”: They have some memetic association with your project, but no economic rights.
The context for Levine's discovery of this innovative funding mechanism was that:
On Jan. 1, Steve Yegge, a famous software developer and writer, announced a project called Gas Town, which you might approximately describe as “an IDE for vibe-coding.” People seem to like it. Yegge did not raise a bunch of money to build Gas Town; he built it himself, apparently for fun. His plans to monetize it were, as far as I can tell, quite vague, in that optimistic open-source-y “if you build something cool the money will work itself out” sort of way. (“I’ve already started to get strange offers, from people sniffing around early rumors of Gas Town, to pay me to sit at home and be myself,” he wrote, though also: “I shared Gas Town with Anthropic in November.” If you build something cool in artificial intelligence these days, the money really does work itself out.)
The way it did "work itself out" was fascinating. Yegge got a LinkedIn message:
The LinkedIn message said that someone had set up a token “for” Gas Town on a crypto platform named, delightfully, Bags. The way Bags works is that anyone can set up a memecoin, and then maybe people will trade it, and if they trade it the platform will collect fees, and whoever sets up the memecoin can collect those trading fees, or direct them to whomever she wants to get them. So someone set up a Gas Town token — “$GAS” — on Bags, and directed the fees to Yegge. Millions of dollars’ worth of $GAS traded, for some reason, generating tens of thousands of dollars of fees waiting for Yegge.
Yegge is a good writer and his account in BAGS and the Creator Economy is worth reading in full.

The next day Levine returned to the story:
In describing this mechanism of financing through memecoins, I made a joke about “The Bleebzorx Network,” so of course someone launched that on Bags. The royalties (i.e. trading fees), apparently, are directed to my X account. I don’t know what that means, exactly; I assume that it means that I can collect the royalties and no one else can. I have not collected the royalties, I do not know how to collect the royalties, and I would not collect the royalties even if I could. (As of noon today they were about $9,500.)
I thought "OK, that's a good joke". But then I read on for the full ridiculous situation:
  1. I have no involvement in this and do not endorse it. I suppose I should have anticipated that something like this would happen, but I honestly didn’t.
  2. While I do not give investing advice, this is obviously dumb and I personally would not, and will not, buy any $BLEEBZORX. In fact, I will go so far as to say that, if you do buy it, you will definitely lose 100% of the money that you put into $BLEEBZORX, and also your self-respect and the respect of your loved ones. “Ooh I bought Bleebzorx tokens,” listen to yourself.
  3. I am not going to collect the royalties that are supposedly accruing in my name.
  4. That said, I do understand that memecoins run on attention and that, by writing about this, I might increase its price and volume and thus the royalties. (“Allow me to get richer just by telling you about it,” Yegge wrote.) I also recognize that many of the people who emailed me to tell me about it probably own $BLEEBZORX coins and were hoping that I would write about it so the price would go up. I am writing about it for journalistic and amusement purposes, not to pump it, but I recognize that in doing so I might be pumping it.
  5. I am trying not to! I really truly do not want you to go around trading $BLEEBZORX for speculation or to generate royalties for me, for a variety of reasons, including (1) I will not collect the royalties so you’re not doing me any favors, (2) the thing above about people ruining their lives by being associated with memecoins and (3) I like to think that this column is a classy establishment and I would be very embarrassed if my readers were going around falling for dumb memecoin pumps.
  6. Because memecoins thrive on attention, I am not going to write about this again. I will pay no more attention to $BLEEBZORX, so you should not buy it to bet on my continued attention.
Levine understands that there is a darker side to this:
But, for another thing, Yegge eventually posted about it. Not — apparently — because he had anything to do with its creation, or because it has anything to do with Gas Town. But because someone sent him a LinkedIn message about it, and the LinkedIn message promised him money, and the money was there. So he posted about it, knowing that doing so would drive attention to $GAS, which would drive more trading of $GAS, which would make him more money. “Allow me to get richer just by telling you about it,” he wrote, correctly. (As of about noon today it had generated more than $290,000 of earnings for him.) That’s how memecoins work! You own them, you tell people about them, you get richer.
So Yegge got $290K richer by pumping $GAS. But Levine asks the real question:
Why did someone do this? Why did someone create $GAS, and why did they (or someone else) message Yegge about it? Why did the creator allocate 99% of the trading fees to Yegge, rather than keeping them for herself? Presumably the creator gave Yegge the trading fees to (1) make it seem more legitimately connected to Gas Town and (2) entice Yegge to post about it. And presumably the creator kept, not the royalties, but a lot of $GAS coins for herself. You set up the coin, you distribute some, you keep a lot yourself, you generate some royalties, you send it to Yegge, you get him to post, the coin goes up and you sell at a profit. The price of $GAS spiked from less than $0.01 to more than $0.04 when Yegge posted about it, peaking at a market value of about $40 million. (Then it collapsed again and now it’s back below $0.001, which is the normal fate of a memecoin.)
Yegge and the promoters of $GAS got richer through a classic cryptocurrency rug pull. David Gerard is all over this in Steve Yegge’s Gas Town: Vibe coding goes crypto scam. He starts with the background:
Steve Yegge is a renowned software developer. He’s done this for thirty-odd years. Senior engineer at Amazon then Google, blogger on the art of programming. Yegge was highly regarded.

Then he got his first hit of vibe code.

In March 2025, Yegge ran some old game code of his full of bugs and sections saying “TO DO” through an AI coding bot — and it fixed some of them! Steve was one-shotted.

The decline was sudden and incurable. He even cowrote a book with Gene Kim called Vibe Coding. Well, I say “wrote” — they used a chatbot for “draft generation and draft ranking”. They vibed the book text.

Yegge and Kim also worked on the DORA report on vibe coding. That’s the one that took people’s self-reported feelings about AI coding and put the vibes on graphs. Complete with error bars. Vibe statistics.

In the book intro, Yegge straight-up says:
It’s like playing a slot machine with infinite payout but also infinite loss potential.

… I’m completely addicted to this new way of coding, and I’m having the time of my life.
Generative AI is all about generating an addiction, as I've been pointing out for many months. And addictions cause irresponsible behavior:
Gas Town is a vibe coder orchestration tool. You get a whole bunch of Claude instances and you just set them to work on your verbal specification. Yegge’s described it as “Kubernetes for agents.” I’d say Kubernetes for sorcerer’s apprentices.

Gas Town is a machine for spending hundreds of dollars a day on Claude Code. All the money you’ve got? Gas Town wants it:
Gas Town is also expensive as hell. You won’t like Gas Town if you ever have to think, even for a moment, about where money comes from.
Yegge’s an extremely experienced professional engineer. So he put care into Gas Town, right?
I’ve never seen the code, and I never care to, which might give you pause.
Of course, as Gerard points out, the code is probably riddled with vulnerabilities that someone who does read the code can exploit. But the irresponsibility is also financial:
Crypto bros have been pulling this scam for years. They say “please publicise our thing about you.” Then the scammer runs away with everyone’s money.

Developers consistently tell these scoundrels: “get outa here.” But not Steve Yegge, ’cos his brain is completely vibed: [Medium]
Woah, what am I, some sorta dumbass? Well yeah, actually. So I went for it.

… When I see a community of earnest young weird-word-using investors cheering Gas Town on, well, I hope they all get filthy rich.

… I’m not endorsing buying crypto, though I am very happy that people are doing it.
I bet you are.
Source
This is the whole history of the scam:
The GAS token was released 13 January at 1pm UTC. Yegge posted about it on 15 January at 2:45am UTC.

By the morning of 16 January, the price peaked at 4 cents a GAS coin! Then the scammer started dumping the tokens and taking money from the suckers. By 7am on 19 January, the GAS token had been fully pumped and dumped.

A couple of hours before the final dump, Yegge posted to his blog: [Medium]
Gas Town itself needs my full attention … So I had to step back from the community.
That and all the money’s gone. Vibe finance.
This is the attention economy in action, so memecoins are a mechanism for funding open source projects if and only if:
  • You are a high-profile member of the community.
  • And you are comfortable living on money from degenerate gamblers.
  • And either you are happy to run pump-and-dump scams or, apparently like Yegge, are happy to front for them in return for a fraction of the take.
The bottom line is that the suckers who fell for the $GAS scam likely lost some $3M, around 90% of which ended up with the scammers and around 10% with Yegge. That's not a very efficient way to fund open source projects.
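The post's 90/10 estimate is easy to sanity-check against its own figures. A minimal sketch in Python; the dollar amounts are the approximate numbers quoted above, not on-chain data:

```python
# Rough accounting of the $GAS pump-and-dump, using the approximate
# figures quoted in this post. Nothing here is exact on-chain data.
sucker_losses = 3_000_000  # estimated total lost by $GAS buyers (USD)
yegge_fees = 290_000       # trading fees the Bags platform routed to Yegge (USD)

yegge_share = yegge_fees / sucker_losses
scammer_share = 1 - yegge_share

print(f"Yegge's share:   {yegge_share:.0%}")    # roughly 10%
print(f"Scammers' share: {scammer_share:.0%}")  # roughly 90%
```

Under these admittedly rough numbers, the 90/10 split described above checks out: as a funding channel for open source, roughly ninety cents of every dollar the gamblers lost went to the scammers, not the project.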

Read the whole story
jsled
1 day ago
reply
South Burlington, Vermont
Share this story
Delete

Lazy Campaign Building Checklist


Recently I had the unique experience of running two session zeros in the same week. Preparing, running, and building campaigns off of these two session zeros helped me refine my own checklist for lazy campaign building which I offer to you today. The checklist below includes links to longer articles on related topics.

Initial Campaign Planning

Before you begin in earnest, consider your system, world, and theme and get your players on board.

  • Choose an RPG system.
  • Choose a campaign setting.
  • Conduct informal conversations with players about your choices to get buy-in. See Get Players to Play Other RPGs.

Pre-Session Zero Campaign Prep

Prepare for your session zero.

  • Come up with your campaign's hook.
  • Come up with your campaign's truths.
  • Think up any worldbuilding questions you want to bring to your players like starting location details, gods, factions, and other unique things. Focus these questions on the things you actually want answers to.
  • Develop your campaign one-pager.

Run Your Session Zero

Spend a session talking to your players about the campaign you're all going to enjoy.

After Your Session Zero

Your session zero is complete, now it's time to build off of it and get to the next adventure you're going to run.

Develop the World

At this point you hopefully have enough to start putting adventures in front of the characters and running games. Now you can step back and flesh out the larger world around the characters, continuing to focus on the characters and spiraling outwards.

  • Develop three villains with their own goals and quests.
  • Develop additional factions: one or two each of good, shady, and bad. Tie them to villains as needed.
  • Develop or use an existing practical pantheon – gods and religions the characters worship, discover, or face off against.
  • Develop a practical history – elements the characters can discover as they explore.
  • Build a faction list you can roll on to flavor monuments, items, NPCs, locations, or encounters.
  • Use the magic system and economy of your system of choice. Keep it simple. You can always add more advanced things later if you want.

Prep and run your next game!

More Sly Flourish Stuff

Each week I record an episode of the Lazy RPG Talk Show (also available as a podcast) in which I talk about all things in tabletop RPGs.

Last Week's Lazy RPG Talk Show Topics

Here are last week's topics, with time-stamped links to the YouTube video.

Patreon Questions and Answers

Also on the Talk Show, I answer questions from Sly Flourish Patrons. Here are last week's questions and answers.

Talk Show Links

Here are links to the sites I referenced during the talk show.

Last week I also posted a YouTube video on The Twin Dragons – Dragon Empire Prep Session 52.

RPG Tips

Each week I think about what I learned in my last RPG session and write them up as RPG tips. Here are this week's tips:

  • Be wary of Groundhog Day adventures where characters repeat a time loop. They can get repetitive and frustrating.
  • Don’t discount simple adventure structures. An NPC wants the characters to do something at a location.
  • Seed two or three future quests while the characters finish their current one.
  • Ask players regularly what they’re enjoying about the game and what they want to see more of.
  • Write down notable characters’ abilities so you can offer situations where they can showcase them.
  • Add relics — flavorful items with a single use of a powerful spell — to your treasure rewards.
  • Keep things simple.

Related Articles

Get More from Sly Flourish

Buy Sly Flourish's Books

Have a question or want to contact me? Check out Sly Flourish's Frequently Asked Questions.


Lose Myself


I don’t know what depression feels like for other people, but I can tell I’m headed down into the muck when my internal monologue turns against me. It’s got a handful of phrases that it repeats over and over when things start to go bad, and one favorite is “Nothing you do matters.”

I’ve been getting that one a lot lately. I know, rationally, that it’s not true. A lot of what I do matters, to my family and my friends and myself. But, you know how it is: this is my mental illness; there are many like it, but this one is mine.

Why this particular phrase at this particular time stings so much is because it’s not entirely untrue, specifically with regard to my profession. I’m a computer programmer, see, and there has been a lot going on.

AI.

Sigh.

They rhyme for a reason.

I’m not talking about the razor-sharp edge, where people eagerly bleed, running AI-based agents that free them from the burden of responding to e-mails from their friends. (Or people who were formerly their friends, given they don’t rate an actual response.)

And I’m not talking about the churning, smoking, shambling software production stacks inspired by dystopian hellscapes. (The Mayor of Gas Town is literally named “The People Eater.” Little too on the nose there, pal.)

And I’m not talking about the grand philosophical debates from our deepest thinkers and our best minds, if computers have risen to sentience, to consciousness. (Spoiler: No. Don’t be stupid. Jesus.)

And I’m not even talking about the moral, ethical, social, environmental, or economic impact of AI, because nobody else is either. Boooring.

What I am talking about is being replaced, about becoming expendable, about machines gaining the ability to adequately perform a very specific function that was previously the exclusive domain of skull meat.

What I’m talking about is that nothing I do matters. That nothing I can do matters.

In just the past few months, what was wild-eyed science fiction is now workaday reality. I’ve been dubious about the prospects of LLMs creating code (and lots and lots of other things) for as long as they’ve existed, but it’s hard to argue with the latest wave and their abilities from a purely practical, purely capitalistic, purely ship-something-anything perspective — the perspective that pays the bills. I’ve seen self-professed non-technical people bring functioning code into being, and that bests a significant number of actual humans I’ve worked with.

The legend has John Henry — the very best in the world — winning his battle against a machine, only to lose the war by, y’know, dying. And I sure as hell ain’t no John Henry. How many steel-drivin’ men take one look at their new opponent and just walk away? How many are making the right decision by doing so?

There are a thousand factors at play here (most of which are still in motion) but for plenty of small-scale, snap-together projects, something like Anthropic’s Claude Code or OpenAI’s Codex will be good enough, for economically-viable values of both “good” and “enough.” They’ll either burp up scripts that simply wouldn’t exist otherwise, or do (some of) the work of (some) junior or mid-level coders (somewhat) faster and cheaper. But the direction things are headed seems pretty clear.

Is the code any good? I don’t know. Who cares? Nobody looks at it anyway. AI produces a result, and results are what matter, and if you’re waiting for quality to factor significantly into that equation, I’ve got some bad news about the last 40 years of professional software development for you.

There are plenty of people I know — they’re not all professional programmers, but most are; people I respect and admire and envy — who have enthusiastically embraced this particular steam engine. Paul Ford wrote a wonderful essay about both his qualms and his excitement — Qualms: 4, Excitement: 6, final — and if what was being replaced wasn’t the basis for my definition of self, I might feel the same. I can ignore moral, ethical, social, environmental, and economic externalities just as well as the next guy.

But I am a programmer. Just like I’m a father and a husband and a son and a friend. It’s not something I do, it’s something that is fundamental to the core of my being. Like overly dramatic phrasing.

I got into computers because solving puzzles was fun, and building worlds was fun, and making things — the process of making things — was fun, down at the granular level. It was nice to have something at the end, but the act of creation was the exciting part. I suspect that predilection will begin to disappear (in commercial environments, at the very least), now that the people who do it — who want to do it — can be replaced. The journey actually was the reward for some subset of weird little freaks, but you can now skip all that crap and just jump to the end and get on with it.

People will argue that speaking English to LLMs is just another level of abstraction away from the physics of how the machine actually works. And while that’s technically true — the worst kind of true — it also misses the point. Industrialization fundamentally changes things, by quantum degrees. A Ding Dong from a factory is not the same thing as a gâteau au chocolat et crème chantilly from a baker, which is not the same thing as cramming chunks of chocolate and scoops of whipped cream directly into your mouth while standing in front of the fridge at 2:00am. The level of care, of personalization, of intimacy — both given and taken — changes its nature. Digging a trench is a very different thing than telling someone to dig a trench. Assembling a clock is a very different thing than asking Siri for the time.

I was lucky enough to have a trench-digging enthusiasm when it was economically advantageous to do so. I managed to pretty much exactly hit the window when deep-nerd brain chemistry could produce a viable, even lucrative, career. I am fortunate to be able to lean into an early senescence and walk (or be pushed) away, as what I want to do and what the world wants me to do diverge.

It still makes me sad, though, that what I’ve spent 45 years of my life toiling at will likely end up as a footnote, the province of folksy artisans and historical reenactors. I didn’t leave a dent in the universe so much as splatted against it. The world no longer has a need for what I somewhat sardonically call my art. We are all product managers now, pleading with obtuse underlings to go back and try again and to get it right this time. I remain a father and husband and son and friend, but the need for what I can do — the need for what programmers can do — is shrinking, and my conception of myself and my usefulness along with it.

There will be more software than ever, as its production is automated; we are entering the industrial age of the digital age. But less of this code will be elegant, or considerate, or graceful. Less of it will be created by removing what isn’t David, and less of it will be driven by a human understanding of human needs.

That was something I did that mattered. I’ll miss it.


Good Riddance


The man responsible for iOS 26, the worst usability regression in Apple history, has jumped ship for Meta, and presumably taken top minions with him.

Alan Dye’s background, before he became Apple’s software interface design exec, was brand design and print advertising, about as far from human-computer interface work as you could find in the design space.

iOS 26’s interface, and the even less well-thought-out macOS 26 Tahoe’s, have been widely ridiculed for terrible legibility resulting from Dye’s widespread use of transparency effects. It was so bad that the design’s core feature, called “Liquid Glass”, was quickly amended with a user preference for a “Tinted Glass” that returned some legibility to the interface.

Dye doesn’t understand interface design and his work at Apple regressed their UI progress by more than a decade. He wasn’t even liked or trusted at Apple and had a terrible reputation across the broader interface design community. Apple blogger John Gruber had this to say of him.

It’s rather extraordinary in today’s hyper-partisan world that there’s nearly universal agreement amongst actual practitioners of user-interface design that Alan Dye is a fraud who led the company deeply astray.

A fraud. Let that sink in. Apple, renowned for its user interfaces for nearly half a century, put a man who is widely considered a fraud in charge of user interface. And though it was former Apple VP of industrial design Jony Ive who put Dye in charge, Tim Cook is ultimately responsible.

Tim Cook has been terrible for Apple customers. His product instincts are worse than useless; the 10 years and $30B blown on his utterly failed attempt at a product legacy, the Vision Pro VR goggles, are as clear an indication of that as any. But many critics at least gave him credit for building a solid team. That credit was misplaced. Most of Cook’s deputies over the last 15 years have been pushed out, or left willingly after spectacular failures, across nearly every part of the business, from AI to design to retail to Siri and search.

For his entire tenure at Apple, Tim Cook’s only customer has been Wall St. Formerly Steve Jobs’ supply chain expert, Cook has inverted Apple’s approach, molding existing products to more closely fit Apple’s supply chain rather than managing suppliers to support new and innovative products.

Tim Cook doesn’t give a shit about users, only that Wall St. is happy. Cook has been great for investors, terrible for customers. At least today we can find some comfort that another one of his failures, Alan Dye, is no longer around to wreak havoc on even more of SJ’s products and legacy.


The New Yorker’s Isaac Chotiner Interviews Santa Claus


For several centuries, Santa Claus has been one of the most prolific mythical gift-givers in the world. Formerly known as Saint Nicholas of Myra, a man whose works included reviving the bodies of three children slain by a serial murderer, Santa Claus reinvented himself in the mid-1800s as a jolly Norwegian-style figure of merriment, whose generosity was based on the recipient’s moral acuity.

I recently spoke with Santa Claus, who is currently coordinating his staff of immortal blue-collar elves, about the morality of children and his friendship with a creature whom many carolers consider a war criminal: Krampus.

You have chosen to spend every Christmas Eve flying around the globe giving gifts to all of the, and I’m quoting here, “good girls and boys.” Why did you decide that only good children deserve gifts?

I wouldn’t say that I made the decision. I’d say that I’m following the traditions set forth by Christianity and other religions in which acts of good are rewarded, while acts of bad are punished. Christmas is a fun way to teach children that being good and kind can lead to positive results, even if being mean or a bully feels better in the moment.

And so-called bad children deserve nothing.

That’s not what I’m saying, Isaac. I’m saying that the better a child behaves, the better the gift they get. There are degrees of bad. A child who won’t play with his little sister might not deserve a Nintendo Switch 2, but, like, a baseball glove? Sure. I can do that.

So if a very wealthy child gets everything they requested and a poor child does not, are you positing that the wealthy child is morally superior to the one who lives in poverty?

No! What I’m saying is that overall—and I just mean overall—whether a child is naughty or nice does have an impact on what they receive. You know, in general.

But you do make the list yourself, and you are the one who checks it twice.

Yes, of course. It’s in the song.

So you are, in fact, the one who decides which children deserve nice things and which don’t.

That’s unfair. I’m saying that, through the magic of Christmas, I can understand the heart of each child and through that special bond—

Can you see into the heart of every child?

Yes.

I have to stop you there for a second because you just said something interesting. If you can see into every child’s heart throughout the year, don’t you feel that you have a moral obligation to help them when they’re in crisis rather than waiting for December to give them a Hatchimal?

Look, you have to understand that the magic of Christmas is limited.

Limited to flying to approximately 2.6 billion Christmas homes in one night and changing your body’s shape to slide down chimneys?

I didn’t say it’s not powerful magic, Isaac. I said it’s limited magic.

I’m just trying to understand how you can run a magic workshop all year long, raise magic reindeer all year long, watch children’s deeds all year long, but your ability to act is limited to a few hours.

Yes. Whether you want to believe that’s true or not, it’s true.

What I also struggle to believe is that you are the self-described arbiter of naughty and nice, but you are close to Krampus.

I don’t know if I’d say we’re close.

There are Christmas cards with you both on the front.

Yes, we both work on the same holiday. We’re both tasked with making Christmas truly magical.

By snatching the children from their beds and taking them to hell?

You’re giving an example of the most extreme situation and making it sound like the norm.

But that does happen on occasion, you agree?

Yes.

And those children are getting dragged to hell by Krampus on the one night that you said you could do something. But you don’t. Why?

Because Krampus has his role and I’ve got mine! I think it’s weird that the Tooth Fairy takes teeth, but that’s not my job either.

So your job is to judge people, but not to judge people for judging people.

You’re making it sound like I approve of Krampus’s methods. I don’t. Just because you share a holiday with someone doesn’t mean you agree with them on everything. I love kids.

Santa Claus, thank you so much for doing this.

Great, thanks. If we go light on the Krampus part, I wouldn’t complain, because it could dwarf everything else.


Welcome to the Slopverse


Bill Lowery, a sales executive, is confused when a workmate asks where he should take a date out for dinosaur. “You’re planning to take this girl out for dinosaur?” Lowery asks. “That’s right,” the colleague responds, totally nonchalant. Lowery presses him, agitated: “Wait a minute. You’re saying dinosaur? What is this, some sort of new-wave expression or something—saying dinosaur instead of lunch?” When Lowery returns home later in the day, his wife reports on their sick son while buttering a slice of bread. “He’s so pale and awfully congested—and he didn’t touch his dinosaur when I took it in to him.” The salesman loses it.

This is the premise of “Wordplay,” an episode of the 1980s reboot of The Twilight Zone. As the episode progresses, people around Lowery begin speaking in an even more jumbled manner, using familiar words in unfamiliar ways. Eventually, Lowery resigns himself to relearning English from his son’s ABC book. The last scene shows him running his hands over an illustration of a dog, underneath which is printed the word Wednesday.

“Wordplay” offers a lesson on the nature of error: Small and inconspicuous changes to the norm can be more disorienting and dangerous than larger, wholesale ones. For that reason, the episode also has something to teach about truth and falsehood in ChatGPT and other such generative-AI products. By now everyone knows that large language models—or LLMs, the systems underlying chatbots—tend to invent things. They make up legal cases and recommend nonexistent software. People call these “hallucinations,” and that seems at first blush like a sensible metaphor: The chatbot appears to be delusional, confidently asserting the unreal as real.

But this is the wrong idea. Hallucination implies that a mistake is being made under a false belief. But an LLM doesn’t believe the “false” information it presents to be true. It doesn’t “believe” anything at all. Instead, an LLM predicts the next word in a sentence based on patterns that it has learned from consuming extremely large quantities of text. An LLM does not think, nor does it know. It interprets a new pattern based on its interpretation of a previous one. A chatbot is only ever chaining together credible guesses.
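The “chaining together credible guesses” that the essay describes can be made concrete with a toy sketch. The snippet below is a deliberately miniature stand-in for an LLM, assuming nothing but a bigram word-frequency model: each word is chosen from the words observed to follow the previous one, with no notion of truth or belief anywhere in the process. (The corpus, the `following` table, and the `generate` function are all illustrative inventions, not anything from a real model.)

```python
import random
from collections import defaultdict

# Toy "training": count which word has been seen following which.
corpus = "the cat sat on the mat and the cat ate lunch on the mat".split()
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length, seed=0):
    """Chain together guesses: each next word is sampled from the
    words observed to follow the previous one. The model never
    checks whether the resulting sentence is true -- only whether
    each step is statistically plausible given the step before."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # no observed continuation; stop generating
        words.append(rng.choice(candidates))
    return " ".join(words)

print(generate("the", 5))
```

Every sentence such a model emits is locally plausible and globally unaccountable, which is the essay's point: the output is not a hallucinated belief but a construction from the statistical raw materials of prior text.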

[Read: The AI mirage]

In “Wordplay,” Lowery is driven mad not because he is being lied to—his colleague and wife really do think the word for lunch is dinosaur, just like a chatbot will sometimes assert that glue belongs on pizza. Lowery is driven mad because the world he inhabits is suddenly just a bit off, deeply familiar but jolted from time to time with nonsense that everyone else perceives as normal. Old words are fabricated with new meanings.

AI does invent things, but not in the sense of hallucinating, of seeing something that isn’t there. Fabrication can mean “lying,” or it can mean “construction.” An LLM does the latter. It makes new prose from the statistical raw materials of old prose. The invented legal case and the made-up software are not actual things in the real universe but credible—even plausible—entities in an alternate universe. They are, in another word, fictional.

Chatbots are convincing because the fictional worlds they present are highly plausible. And they are plausible because the predictive work that an LLM does is extremely effective. This is true when chatbots make outright errors, and it’s also true when they respond to imaginative prompts. This distinctive machinery demands a better metaphor: It is not hallucinatory but multiversal. When generative AI presents fabricated information, it opens a path to another reality for the user; it multiverses rather than hallucinates. The fictions that result, many so small and meaningless, can be accepted without much trouble.

The multiverse trope—which presents the idea of branching, alternate versions of reality—was once relegated to theoretical physics, esoteric science fiction, and fringe pop culture. But it has since become widespread in mass-market media. Multiverses are everywhere in the Marvel Cinematic Universe. Rick and Morty has one, as do Everything Everywhere All at Once and Dark Matter. The alternate universes depicted in fiction set the expectation that multiverses are spectacular, involving wormholes and portals into literal, physical parallel worlds. It seems we got stupid chatbots instead, though the basic idea is the same. The nonexistent legal case that AI suggests could exist in a very similar universe parallel to our own. So could the fictional software.

The multiversal nature of LLM-generated text is easy to see when you use chatbots to do conceptual blending, the novel fusion of disparate topics. I can ask ChatGPT to produce a Charles Bukowski poem about Labubu and it gives me lines like, “The clerk said, they call it art toy, / like that explained anything. / Thirty bucks for a goblin that grins / like it knows the world’s already over.” Even as I know with certainty that Buk never wrote such a poem, the result is plausible; I can imagine a possible world in which the poet and the goblin toy coexisted, and this material resulted from their encounter. But running such a gut check against every single sentence or reference an LLM offers would be overwhelming—especially given that increasing efficiency is a major reason to use an LLM. Chatbots flood the zone with possible worlds—“slopworlds,” we might call them, together composing a slopverse.

[Read: AI’s real hallucination problem]

The slopverse worsens the better the LLMs become. Think about it in terms of multiversal fiction: The most terrifying or uncanny alternate universes are the ones that appear extremely similar to the known world, with small changes. In “Wordplay,” language is far more threatening to Bill Lowery because familiar words have shifted meanings, rather than English having been replaced by a totally different language. In Dark Matter, a parallel-universe version of Chicago as a desolate wasteland is more obviously counterfactual—and thus less uncanny—than a parallel universe in which the main character’s wife had not given up her career as an artist to have children. Parallel universes that wildly diverge from accepted reality are easily processed as absurd or fantastical—like the universe in Everything Everywhere All at Once where people have fingers made of hot dogs—and familiar ones convey subtler lessons of contingency, possibility, and regret.  

Near universes such as the one Lowery occupies in The Twilight Zone can create empathy and unease, the uncanny truth that life could be almost the same yet profoundly different. But the trick works only because the audience knows that those worlds are counterfactual (and they know because the stories tell them directly). Not so for AI chatbots, which leave the matter a puzzle. Worse, LLMs are functional rather than narrative multiverses—they produce ideas, symbols, and solutions that are actually put to use.

The internet already acclimated users to this state of affairs, even before LLMs came on the scene. When one searches for something on Google, the resulting websites are not necessarily the best or most accurate but the most popular (along with some that have paid to be promoted by the search engine). Their information might be correct, but it need not be in order to rise to the top. Searching for goods on Amazon or other online retailers yields results of a kind, but not necessarily the right ones. Likewise, social-media sites such as Facebook, X, and TikTok surface content that might be engaging but isn’t necessarily correct in every, or any, way.

People were misled by media long before the internet, of course, but they have been misled even more since it arrived. For two decades now, almost everything people see online has been potentially incorrect, untrustworthy, or otherwise decoupled from reality. Every internet user has had to run a hand-rolled, probabilistic analysis of everything they’ve seen online, testing its plausibility for risks of deception or flimflam. The slopverse simply expands that situation—and massively, down to every utterance.

Faced with the problems a slopverse poses, AI proponents would likely make the same argument they do about hallucinations: that eventually, the data, training processes, and architecture will improve, increasing accuracy and reducing multiversal schism. Maybe so.

But another worse and perhaps more likely possibility exists: that no matter how much the technology improves, it will do so only asymptotically, making the many multiverses every chat interaction spawns more and more difficult to distinguish from the real world. The worst nightmares in multiversal fiction arrive when an alternate reality is exactly the same save for one thing, which might not matter, or which might change everything entirely.
