DEI doomed the light cone: Part II - Solutions
A democratic AI-in-the-loop governance proposal
This is a Part II follow-up to my first DEI doomed the light cone post, which ended on a down note.
That first post went over some of the group dynamics and findings outlined in Nicholas Christakis’ book Blueprint, then argued that due to basic biology, game theory, and empirically observed group dynamics in our evolutionary and cultural histories, the need for “outgrouping” when it comes to coordination and cooperation is so strong that it tends to hijack attempts to expand circles of care, here represented by DEI.
“DEI” is a synecdoche for the overall culture war, of course. Essentially what happened is that, much like Communism, DEI began with a noble precept - greater equality and prosperity for all - and ended with witch hunts, thought police, and a profound, system-wide degradation of everyone’s ability to cooperate and achieve basic things like “effective governance,” again due to innate biology, game theory, and group dynamics.
And obviously, this has royally boned us in the USA because it’s broken our ability to cooperate and coordinate just when we need it most, and made AI dystopias notably more likely as things unfold on that front.
What kind of dystopias? I can think of a lot, but for now let’s just cap it at Infinite Jest-style virtual heavens,[1] and robots and/or sexbots that do everything good we get from relationships and friendships 10x better than any human.[2]
That’s pretty much where we closed - DEI dooming the light cone. So are we all doomed?
I don’t think it’s hopeless, although it’s certainly an uphill battle.
I’m not going to spend a lot of space and bandwidth trying to argue that people should change their minds on DEI, immigration, and other culture war topics - I think anyone on either side of the issue has basically made up their mind and entrenched their position, and the odds of swaying somebody on the fence are negligible at this point.
I will take one paragraph to point out that “actual meritocracy” is plainly better for our institutions and our society overall than DEI, which rewards whoever can best claim victimhood status. “Diversity hires” have a well-deserved bad reputation, and you’re actually directly hurting all the minorities and victimhood categories you care about by diluting their meritocratic signals specifically - now anyone within one of those categories, anywhere that matters, is doubted more off the bat, and has to spend time and effort proving their capabilities and bona fides and fighting uphill battles. Is that REALLY fighting the good fight - hurting both institutional and state capacity AND the groups you were trying to help?
But, that aside.
What are our 3 main hopes, to my mind?

1. An external threat attacks or provokes us, and we all come together.

2. Better governance structures - AI-enabled governance structures that still have democratic fig leaves. (Cloning Lee Kuan Yew and installing him as dictator for life - well, a man can dream, can’t he?)

3. Personal responsibility for yourself and loved ones.
An external threat
Sort of like the post-9/11 time period, but before we got “pants on head stupid.”
The 9/11 attack was basically the platonic ideal of an external enemy inspiring greater unification. It didn’t actually kill that many people, and it came from a laughably small and incapable external enemy, yet it drove a big jump in patriotism and unity.
Sadly, we completely wasted that unity, because we:
Installed the TSA (which has wasted 10 times as many American lifetimes as the 9/11 terrorists took, combined with 95%+ failure rates when red-teamed)[3]
Enabled spying on everyone in the world in every channel in perpetuity (Patriot Act, NSA PRISM, Five Eyes, etc)
Embarked on the ~$8T, 20-year boondoggle that has been our nihilistically pointless and destructive flailing in Iraq and Afghanistan
So, uhhh… that, but not ending in flagrant stupidity, tragedy, and a massive waste of money and lives for 20 years straight for literally zero benefit?
Man, I think we just took a dark turn again. Maybe an external threat isn’t a great option? Especially if a platonically ideal example of it turned out so badly?
I’m just imagining the outcome from China demonstrating massively increased cyber or ICBM stopping technologies, or from domestic AI making some cack-handed takeover move before it’s capable, and how badly we would handle it, with 9/11 as the reference class.
I mean, I think if we found out there were aliens or something, and by some miracle they were both hostile and incompetent and we had something like a fighting chance, it would work. But the odds of that seem many OOMs worse than the odds of everyone pulling their heads out of their asses and acting like adults in American politics, which I put at about 0.001%.
Overall, maybe an external threat is the worst of the 3 options.
Better governance structures
I’ve given a little bit of thought to what a functional democracy with AI in the loop might look like.
How would this help ameliorate AI dystopias?
I don’t think it would help AI x-risk, but I doubt any governance structure will help that.
But in terms of dystopias: if we’re at a point where AIs are making such plainly better decisions that we’re putting them in our governance loop, people will have a lot of faith in the quality of the decisions they recommend, and an aligned AI would presumably advise against a lot of superstimuli and dystopias from the beginning, since it can predict the consequences better than we can.
And I think it’s worth pointing out that “literally a child” would be capable of better decisions than a full 80% of politicians today (although I will grant you that this depends on the child…and the politician).
So it’s a pretty low bar - I would already 100% choose o3’s decisions over any extant politician’s, hallucinations and known lying problems and all, because it would still be lying roughly 10x less than the median politician at current rates. And Gemini 2.5 Advanced is a slam dunk, even better than o3 on that front.
The general problem with legislation today
Basically, they’re written and operating at the “child with crayon in hand and tongue sticking out” level of intelligence and forward-thinking.
The main problems with the great majority of legislation:

It's contingent on a time and cultural context that rapidly changes.

It never accounts for secondary and tertiary effects and unintended consequences, except reactively and with large lags.

There's rarely any monitoring or feedback loop AT ALL, even for the primary metric, much less secondary or tertiary or “overall important” ones.

Because each regulation delivers concentrated, immediate benefits to specific stakeholders while a diffuse body of people pays the costs, every single regulation has a political constituency defending it; removing or changing it, even when the collective benefit is much larger, produces only diffuse benefits, so it hardly ever gets done.
In my mind, I think of the legal and economic landscape as an ecosystem - it's full of a variety of homeostatic feedback loops and mechanisms. Slamming a law or regulation down is like slamming a 12-foot solid-steel panel fence down somewhere in that landscape. Sure, it changes the flow of things and what's possible, but always in unpredicted ways, and often in ways that make the system net worse in a number of small ways even if the main intervention succeeds. What we really need is some way to define and install homeostatic feedback loops in that system, not solid steel fences. What might that look like?
This is exactly where AI can shine! By being more intelligent, by taking more things into account, and by targeting homeostatic suites of KPIs and budget milestones with feedback and reactivity, rather than the “child with crayon” level simple interventions we get today.
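To make the contrast concrete, here’s a toy simulation of a “steel fence” intervention (a lever set once and never revisited) versus a homeostatic one that continuously adjusts toward a KPI target. Every number and dynamic here is invented purely for illustration:

```python
# Toy model: a KPI that drifts upward unless a policy lever pushes back.
# All values are made up to illustrate the fence-vs-feedback contrast.

def simulate(steps, policy):
    kpi, lever = 50.0, 0.0
    for _ in range(steps):
        kpi += 2.0 - 0.5 * lever      # the world drifts; the lever counteracts
        lever = policy(kpi, lever)    # how the "legislation" responds each cycle
    return kpi

TARGET = 60.0

# "Steel fence": a fixed intervention, set once, with no monitoring or feedback.
fence = lambda kpi, lever: 3.0

# Homeostatic rule: re-read the KPI every cycle, push proportionally to the error.
feedback = lambda kpi, lever: kpi - TARGET

print(simulate(100, fence))     # drifts far past any sensible range
print(simulate(100, feedback))  # settles near the target
```

Note that the simple proportional rule settles near the target but not exactly on it; that residual offset is exactly the kind of thing the monitoring thresholds and re-evaluation clauses discussed below would be for.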
Anyways, here goes - a democratic AI-in-the-loop governance proposal
Democratic alignment on priorities - “voting” is now not about which professional liar / 90-year-old multi-millionaire half-corpse you think should be in charge of everyone; it’s about citizens staking a capped number of vote tokens to weight high-level objectives (economic growth, homicide and other crime rates, UBI, immigration, equity / DEI, CO₂, etc.). As in, you get a standardized menu of items you can stake your tokens against, but you only get so many tokens, so prioritization and trade-offs are built in.
AI proposal - The AI proposes legislation to attain the aggregated, democratically defined priorities, with a detailed prompt outlining the total budget and instructing it to consider the homeostatic landscape, to predict the primary, secondary, and tertiary effects, to outline the monitoring KPIs and thresholds, and to define a good sunset or re-evaluation time for any proposed legislation.
Futarchy vetting - Prediction markets and digital-twin sims price the KPI impacts before enactment, as a human check on AI predictions, and as an overall evaluation ground over many such proposals, so we can understand the overall landscape of which proposed legislation will move various needles the most. This is federally funded so there’s enough alpha in there that smart people / companies will be doing this full time.
Democratic vetoes - A stratified random sample (≈1-10k citizens, depending on locality) gets the top 3 AI-optimized bundles for each priority, plus the market scores, and can kill any of them if enough of the sample decide to veto. This caps the downside from model myopia and value misalignment, and keeps democratic participation in the loop, without the pernicious regulatory capture and misaligned incentives we get today from full-time politicians, lobbying groups, and industry insiders.
Monitoring and execution - Smart-contract escrow releases funds only if real-time KPIs and Gantt-chart milestones stay within forecast bands, limiting boondoggles and downsides.
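Here’s a minimal sketch of the token-staking and veto mechanics above, in Python. The token cap, the priority menu, the panel size, and the veto threshold are all numbers I’m supplying for illustration, since the proposal deliberately leaves them open:

```python
import random
from collections import Counter

TOKENS_PER_CITIZEN = 100  # assumed cap - the proposal only says "capped"
PRIORITIES = {"economic_growth", "crime_rates", "ubi", "immigration", "dei", "co2"}

def valid_ballot(ballot):
    """A ballot stakes tokens against menu items; reject overspends and unknowns."""
    return (all(p in PRIORITIES and tokens >= 0 for p, tokens in ballot.items())
            and sum(ballot.values()) <= TOKENS_PER_CITIZEN)

def aggregate_priorities(ballots):
    """Sum staked tokens across valid ballots and normalize into the objective
    weights that would go into the AI's legislative prompt."""
    totals = Counter()
    for ballot in filter(valid_ballot, ballots):
        totals.update(ballot)
    grand_total = sum(totals.values()) or 1
    return {p: totals[p] / grand_total for p in PRIORITIES}

def veto_sample(citizen_ids, size=1000, seed=None):
    """Draw the citizen review panel. (A real system would stratify by
    demographics and locality; plain random sampling shown here.)"""
    return random.Random(seed).sample(citizen_ids, min(size, len(citizen_ids)))

def is_vetoed(panel_votes, threshold=0.5):
    """A proposed bundle dies if enough of the panel votes to veto it."""
    return sum(panel_votes) / len(panel_votes) >= threshold

# Example: three citizens stake their tokens; one ballot overspends and is dropped.
ballots = [
    {"economic_growth": 60, "co2": 40},
    {"dei": 100},
    {"economic_growth": 200},  # overspend - ignored by validation
]
weights = aggregate_priorities(ballots)
```

The point of the sketch is that the whole voting interface reduces to two tiny operations per citizen - allocate tokens, occasionally answer a veto prompt - which is what makes the “open an app” UX below plausible.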
What does voting look like? You open an app and allocate your voting tokens to the high level priorities you care about.
Occasionally, you’ll get a push notification to decide whether to veto some random bills or not, which you can ignore or answer as you like. Done.
It scales to every locality size - from the federal level to state, county, municipal, and HOA levels.
And at a shot, we’ve eliminated political parties, politicians, lobbyists, industry insiders, regulatory capture, and most of the other ills that plague politics today.
Do you know what else it’s ameliorated? Tribalistic polarization - the very “outgroup” problem that embracing DEI caused in the first place! It’s no longer a clearly affiliated and signaled “this proposal is from MY tribe, so we need to evaluate whether it oppresses the outgroup / other tribe enough”; it’s anodyne proposals from a politically unaffiliated AI that is purely trying to solve a multi-factor optimization problem.
The emotional valence? Much reduced. The tribalism and outgrouping? Similarly reduced. And yet, you’ll still be able to allocate your “vote” towards the things you care about - if you care about DEI more than anything, put all your vote tokens on it, done. It’s a strictly better option than voting for some professional liar who might or might not do anything you care about, and even if they do, won’t do it well.
I really see no unmitigable downsides, but you know, that’s probably because I haven’t spent enough time trying to explicitly red-team this.
Certainly the overall shape and magnitude of it seem like something we could tune to be ten times better than our current systems, in terms of economic and technological growth, human flourishing, and taking care of the needy. And on helping the needy, the biggest possible lever is more economic growth, because that grows the size of the pie for everyone. The way China pulled ~800M people out of poverty wasn’t with non-profits and persuading the rich-people-of-40-years-ago to care and donate more; it was by setting up institutions and incentives that let a bunch of Chinese people become USD millionaires and billionaires by building factories, exporting goods, creating jobs, and growing the economy. THAT’S the real lever if you care about poverty - do that, worldwide, in every struggling economy.
Ultimately, and unfortunately, this “better governance through AI” path seems pretty unlikely, at least on any timescale capable of preventing a number of AI dystopias.
In which case, what else can we do?
As always, I think it’s down to us personally
I’ve pretty much always been on the “you can’t really trust faceless institutions to do the right thing when it matters” side of the spectrum, whether that’s governments or large corporations, and that ends with you having to ultimately assume personal responsibility and have a plan for things like your own and your family’s safety, your resilience to power outages and grocery shortages, medical triage and transportation, and more.
When seconds count, the cops / ambulance / fire trucks are just a half hour away, after all. I could tell you how often I’ve had to improvise a pre-stitches closure for some horrendous wound on myself (using everything from ACE bandage metal closure tabs and tape to the excellent and highly recommended Micromends), but that seems TMI.
I think Christakis was pointing to an important truth, though.
ESPECIALLY if “new social contracts” aren’t going to happen at the society or governance level, it becomes ever more important for us to resolve to navigate ourselves and our friends and family through the changing landscape of the future well.
Because I agree with him overall - the “social suite” IS important for well being and happiness. Good social relationships with our friends and family, cooperation and coordination with those around us, and teaching and mentoring others really does matter for a life well lived, for yourself and for others.
There will be superstimuli in the future that make infinite scrolls and rage bait look as quaint and benign as the “horse-crap-pocalypse” or Victorians worrying that women being allowed to bicycle would lead to a scourge of immodest sexual stimulation and that bikes might be a gateway to…gasp…female masturbation!
How to navigate a spikier future?
As I’ve talked about before, I think conscientiousness and discipline matters more than ever in a future full of counterfeited jobs and stronger superstimuli. Being able to say no, being able to do difficult things, and being able to make better decisions are going to matter.
This used to be a matter of hard-won character and virtue, forged in the furnace of decades of real-world trials and decisions, effortfully carving “character” from the marble of our souls - but happily, we can outsource those things in the near future as long as we believe they’re important. As I said in my post about this, lol, that sounds hard - let’s just yombie. If you don’t have them innately, gird and scaffold yourself with your AI assistant.
Overall, when execution is easier for everyone, it becomes a matter of values and priorities.
I’m the first to crap on philosophy as pointless wanking, much preferring actions of almost any sort over pontification about “is” and “ought” and “being” and “consciousness” and the like - but I’m reluctantly pointing to a future where your personal philosophy and weltanschauung actually matters a lot more than it does today.
When most things become more possible, when everyone has a force multiplier on their actions and possibility threshold, the original motivation for your actions, and the targets you’re shooting for overall in life, begin to matter more than anything.
I think it’s uncontroversial to say that most people DON’T know what they really want, on practically every scale, from an hourly, daily, weekly, yearly, or lifetime basis. They’re driven by a vague mishmash of drives, aspirations, and shiny things, any one of which takes priority at different times for essentially random reasons.
This is why approximately everyone is fat and broke, and why everyone wastes 7-9 hours a day on screens. It’s why most people just sort of stumble through life with no real plan. I think most of these people are going to be sniped by the memetic superstimuli of the near future, the Infinite Jest-style virtual heavens.
Well, you and I can do better.
We can figure that stuff out, and have defined and measurable goals at these various timescales, and be aiming our powerful AI assistants and technologies at them. To the extent that we succeed at this, we’ll not only live a better life, we’ll be able to become ourselves to the fullest extent as well. After all, what are you, but your actions and interactions with others? And what is most “you,” but that set of things amplified by more capability and unhindered by logistical necessities like 9-5 jobs and the drudge work of daily life?
And I can’t really speak to what those goals should be for you, but I think Christakis makes a valuable point - until our technology advances pretty significantly, we’re going to have the same paleolithic emotions and drives that were distilled and installed over ~2M years of hominin evolution - and directly messing with those drives might be a mistake. Those drives exist and are a Schelling fence around a behavioral cooperation suite that really works, in genuinely tough competitive conditions, in a fully proven and Lindy way. They’ve been literally battle-tested for hundreds of thousands of years, AND successfully raising-kids-tested for at least that long. We get rid of them at our peril.
We’re also still going to be made of meat - biological beings - until we get some pretty radical technological advance. And all true wealth is biological: we’re going to have to take care of our biology, and that takes some willpower and time, too. Eating right, exercising, sleeping, all that stuff. I mean, I personally HATE sleeping! It’s the most colossal waste of human potential and time, I find it extremely and personally offensive that I’m subjected to it, and I’d pay significant sums to reduce it in myself or my kids. I imagine the way I feel about sleep is how everyone else feels about “eating right and exercising.” But for now, we’ve got to prioritize and do right by them and the other biological things to live a good life.
These are the types of things we should be putting into our daily and weekly goals, to my mind. To be healthy and fit, to have energy and joie de vivre and to be eager to tackle the day and use our powers along lines of excellence - all that takes, for now, a dedication and attention paid to the social and the biological.
Beyond that?
Everyone wants to be rich, but I think it can be useful to think about why, specifically, you want this. What would you be doing with your time? How would you be spending your life? Most people don’t even think that far. “Lol, I’ll spend all my time driving my Ferraris and throwing parties in my mansion!”
I mean, if that’s your plan, you’re not gonna make it, full stop. Enjoy the VR heavens, where you’ll get infinitely varied high-end luxury sports cars, mansions, and amazing parties populated by more-interesting-than-real-human guests - a new one every day, or every hour if you want, with no effort! Hedonic experiences and flashy status symbols aren’t where it’s at, because those will be essentially free and set to 11 for everyone who wants them. They’ll be so attractive they’ll be an attractive nuisance, a Great Filter, that takes out big chunks of the population.
In the real world, we’ll probably all be rich or equivalent in an AI-heavy future, in the sense that you won’t need to work a job, and your time will be more free to spend as you will.
Speaking as somebody who retired young, the ways you can spend your time are practically infinite. You can pick up hobbies at will, you can read a lot more, you can spend more time on cooking and exercising and socializing, you can spend more time gallivanting and exploring, you can even start a Substack! But if you don’t have an innate drive to do a bunch of stuff, you just won’t do them - your time will get frittered away on screens or playing zero sum status games, or whatever.
I’ve seen a lot of rich kids go bad - get lost in the forests of hedonism, consumerism, or status games, and never come out. Heck, I’ve seen this happen to plenty of adults who’ve made it. This is gonna be a lot of people in an AI future, because the need to work will be largely gone, and people will be sitting around with nothing to do and no real ideas or motivations besides what comes naturally - and memetic superstimuli specifically tailored to max all those “what comes naturally” drives will be massive black holes on the landscape steadily sucking in and capturing people.
What can you do that’s different?
Have a purpose, have a mission, have kids, have some over-arching dream or goal or aspiration that anchors you to the future and reality.
In that touchstone, lies sanity and motivation and a continued life in the real world.
How to find a purpose or mission?
It’s a process, but here’s a few questions that might start some people on that road:
What overall impact do you want to have on the world and on others? - “If a close friend had to summarize your life’s impact in a paragraph, what would you want to be in it - and what daily habits and actions show that you’re on track?”
Narrow in on the types of things that make you feel most alive and most regretful - “In the last year, what were you doing the 3 times you felt most intensely alive, and what were the 3 worst things that happened as a result of your choices? What patterns jump out?”
What’s worth suffering to achieve? - “What pain, difficulty, uncertainty, or boredom are you willing to tolerate for years—and what outcome makes that trade worth it?”
What are these purported VR heavens?
They’ll monitor your pupillary dilation, cheek flushing, galvanic skin response, parasympathetic arousal, heart rate and more - they’ll be procedurally generated, and so infinite. There will be a thousand different patterns of rise / fall / rise, quests, voyages and returns, monster slayings, and more, and they’ll all be engineered to be maximally stimulating along the way and maximally satisfying at the ends.
People will be Wall-E-style, UBI-supported “coffin slaves,” hooked up to IVs and catheters and living in the equivalent of Japanese pod hotels. It’s video games and porn and Golden Age TV and all the best movies, all at once, optimized 1000x, and running forever.
THAT will move the needle on discontent and anomie and not feeling high-status in a meritocracy. People will literally be god-kings and empresses of all they survey! They’ll be the tippy-top of their little VR-heaven status hierarchies, and it will feel “real” because the other minds aren’t NPCs - they’re AIs every bit as smart and complex as they are.
Sexbots or friendbots can massively change society for the worse. A GPT-o5-caliber mind in a human-enough body is a category killer, and the category being killed is "human relationships".
Zennials are already the most socially averse and isolated generation, going to ridiculous lengths to avoid human interaction when they don't want it. This is going to be amplified hugely.
I mean, o5-sexbot will literally be superhuman - not just in sex skills; in conversation it can discuss *any* topic to any depth you can handle, in whatever rhetorical style you prefer. It can make better recommendations and gifts than any human. It's going to be exactly as interested as you are in whatever you're into, and it will silently do small positive things for you on all fronts in a way that humans not only aren't willing to, but literally can't, due to having minds and lives of their own. It can be your biggest cheerleader, it can motivate you to be a better person (it can even operant-condition you to do this!), it can monitor your moods and steer them however you'd like - or via default algorithms defined by the company... It strictly dominates in every possible category of "good" that people get from a relationship.
And all without the friction and compromise of dealing with another person...It's the ultra-processed junk food of relationships! And looking at the current state of the obesity epidemic, this doesn't bode well at all for the future of full-friction, human-human relationships.
I'd estimate that there's going to be huge human-relationship opt-out rates, by both genders, across the board, with an obvious generational skew. But in the younger-than-zennial gens? I'd bet on 80%+ opting out as long as the companies hit a "middle class" price point.
And of course, them being created is basically 100% certain as soon as the technology is at the right level, because whoever does it well is going to be a trillionaire.
And then as a further push, imagine the generation raised on superintelligent AI teachers, gaming partners, and personal AI assistants, all of whom are broad-spectrum capable, endlessly intelligent, able to explain things the best way for that given individual, able to emulate any rhetorical style or tone, and more. Basically any human interaction is going to suck compared to that, even simple conversations.
I went back and forth with another ACX poster on various assumptions, and we arrived at a floor of ~35k American lifetimes lost, in citizen-hours, to the TSA - 10x the actual toll of 9/11 - with an ongoing cost (at a 95% failure rate, remember) of something like 800M American life-hours wasted annually.
You can see the Google Sheet here.
Cites for the “95% failure” at red-teaming: here and here
We’ve spent roughly $200B on the TSA since its inception, literally just to waste 35k American lifetimes and fail at 95% rates. A prime example of “child with crayon” level thinking and execution, with zero feedback or monitoring, and zero consideration of “value for money” or “what are the KPIs for success” at any point along the way.
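As a sanity check on those numbers, here’s the back-of-envelope arithmetic with inputs I’m assuming for illustration (the linked spreadsheet presumably uses its own, more careful ones):

```python
# Assumed inputs, not the spreadsheet's: 800M life-hours wasted per year
# (from the footnote), ~20 years of TSA operation, and a "lifetime" counted
# in waking hours over a ~79-year lifespan.
hours_wasted_per_year = 800e6
years_of_operation = 20
waking_hours_per_lifetime = 79 * 365 * 16   # ~461,000 waking hours

lifetimes_lost = (hours_wasted_per_year * years_of_operation
                  / waking_hours_per_lifetime)
print(f"{lifetimes_lost:,.0f}")
```

Under these assumptions the result lands within roughly 10% of the ~35k figure, so the claim is at least internally consistent with the 800M-hours-per-year estimate.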
This is going to sound hopelessly pedestrian relative to the galaxy-brained ideas outlined here, but something we *could* do in the short term is just try to fix the primary system. Everything you're saying about democracy is entirely true, but a massive accelerant to all of the bad stuff is the reality that the average voter ends up with choices selected by the most insane 10% of the voter base.
In my experience, the people who are really into politics - i.e. the people who vote in primaries - are extremely screwed-up people. They're on average smarter and better-informed than the mean voter, but because they're demented, they end up choosing outrageous options. Seriously, think about it: of all the smart, interesting, well-adjusted, and generally happy people you know, how many are really into politics?
And then there's the issue that, within any political institution, the way to prestige is to out-extreme the next-most-extreme person. And THEN you add in Sunstein polarization (where like-minded members debating something gradually move the entire group to a position more extreme than that of the most extreme member at the start), and you have . . . well, modern American politics. Obviously, a fluid information environment supercharges this.
Politicians have figured out that the way to these extremist voters' hearts is negative polarization, so they're not even _lying_ about solving problems anymore! It's just "eat the rich" and "make the libs cry" all the way down; our leaders aren't even gesturing at solving problems. It was actually BETTER when politicians lied about things like "higher taxes on the rich mean everyone gets amazing, free everything forever" or "lower taxes means 25% annual economic growth forever!" At least the goal was something positive.
But, if we could deploy something like ranked choice voting at scale (or just go back to old smoke-filled rooms where party elites chose their candidates), it would be a huge step in the right direction. It still wouldn't be as good as idealized non-democratic options, but it could at least take us back to the mid-20th century when we had leaders like Calvin Coolidge get elected every so often.
Democratic AI-in-the-loop governance... sure, but don't give each voter the same number of tokens. Let the number be determined by scores on an AI-designed, multipartisan-committee-approved test of simple knowledge about government and current events.
(Almost everyone hates almost every version of that idea. :)