Episodes

  • Welcome to this update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.

    If you're reading this and haven't yet subscribed, join 25,000+ curious souls on a journey to build a better future 🚀🔮

    To Begin

    This week brings news from Boston Dynamics and the Chinese Academy of Sciences. The message common to both stories? The humanoid robots are coming.

    Meanwhile, the internet reacts to Appleā€™s new Vision Pro headset.

    And the FCC takes action against a Texas-based company that used AI to create fake phone calls from President Biden.

    Let's go!

    🤖 Robots are go

    This week, yet further signs that the robots will soon walk among us. I mean, all of us.

    The Boston Dynamics humanoid, Atlas, has been a regular in this newsletter over the years. Recently it has been overshadowed by competitors, including the Digit humanoid by Agility Robotics and Tesla's Optimus.

    But this week Boston Dynamics released a video that shows Atlas picking up automotive struts and placing them in a flow cart.

    The team say Atlas is using onboard sensors and object recognition to perform the task. The footage is short. But it marks a significant advance for Atlas, because previous videos have shown the robot doing elaborate dances rather than useful work, and those dances have been pre-programmed rather than autonomous.

    Meanwhile, in Beijing a research team at the Institute of Automation in the Chinese Academy of Sciences this week debuted their Q Family of humanoid robots.

    The research team have reportedly built a 'big factory' for the design and manufacture of Q Family humanoids.

    Back in New Week #124 we saw how the CCP has ordered 'domestic mass production' of humanoids to fuel economic growth. Remember, this is the underlying demographic reality that has China dashing towards robots.

    ⚡ NWSH Take: In last month's Lookout to 2024 I said this would be the year of the humanoid. We closed out 2023 with the announcement that the Digit humanoid had started a trial inside US Amazon fulfilment centres. Days after I published the Lookout, BMW announced a trial of Digit in its California manufacturing plant. Now, the Boston Dynamics team are clearly eyeing commercial applications, too. Their Atlas robot has so far remained a research project; the question they'll have to answer if they want to change that is whether Atlas can match Digit and Tesla's Optimus for autonomous capability. // The graph above tells the underlying socio-economic story here. Both the CCP and innovators in the Global North know that working age populations are falling. If economic growth isn't to become a distant memory, we need new armies of autonomous workers. AI applications can handle some of our knowledge work. But we'll need humanoids to do some of the physical work that currently only people can do. The CCP see this as an existential imperative; they know they must maintain GDP growth. For innovators in the US and beyond, it's an epic opportunity.

    👀 Having visions

    No one could have missed the launch of the Apple Vision Pro a few days ago.

    Years from now, this instantly iconic magazine cover will no doubt spark intense nostalgia for the simpler times that were 2024:

    It took about ten minutes for someone to try out their new Vision Pro while using Full Self-Driving in their Tesla:

    This was later revealed to be (surprise!) a skit for YouTube. Still, it delivered useful findings; the man in the picture, Dante Lentini, says the Vision Pro doesn't really work inside a moving car because it can't properly display visuals over a fast-moving landscape.

    ⚡ NWSH Take: After the frenetic metaverse hype of 2021, many will shrug at the launch of the Vision Pro. But something real, and powerful, is happening here. The internet is going to become part of the world around us. In the end, this is about the deep merging of information and physical reality, of bits and atoms, that I wrote about in the essay Intelligence in the World. // We're going to see the emergence of a unified digital-physical field: a blended domain of bits and atoms that is a new, and in some sense final, innovation platform, because it brings together everything we do online with everything we do in the real world. // Apple's new product — whether it proves a hit or not — is just another signal of this underlying process. I'll get my hands on one ASAP and report back. But Apple, here, are clearly aiming at high-end and industry users; they're going to have to make a cheaper product if they want mainstream impact.

    ☎️ Good call

    Also this week, a glimpse of what lies ahead when it comes to this year's US presidential election.

    The FCC this week banned AI-voiced robocalls after an AI Joe Biden 'called' over 25,000 voters in late January and told them not to vote in the then-upcoming presidential primary elections.

    The calls have been traced back to a Texas-based company called Life Corporation, owned by an entrepreneur with a long history in automated calling for political campaigns. Researchers believe Life Corporation used software from UK-based AI voice startup ElevenLabs, which I've written about here several times before, to deepfake Biden's voice.

    ElevenLabs just raised an $80 million series B funding round, led by VC firm Andreessen Horowitz, that valued the company at $1.1 billion.

    ⚡ NWSH Take: In the Lookout to 2024 I said we should expect politics to collide with the exponential age this year. The impact of AI deepfakes on November's US presidential election will be at the heart of that story. Okay, the FCC has banned AI calls. But deepfake audio and video is surely going to be rife on Facebook, Elon Musk's X, and TikTok. // Our liberal democracies were built in the age of one-to-many mass broadcast; those broadcasts were gatekept by social elites that felt a sense of duty towards the broader socio-political system in which they were operating. It wasn't perfect, but it muddled along. Now, we've built previously unimagined technologies of image and sound manipulation. We've slain the gatekeepers, and told ourselves that this was an empowering move. The upshot? We're about to find out how liberal democracies work under those conditions.

    🗓️ Also this week

    👶 Researchers trained a large language model using only inputs from a headcam attached to a toddler. A data science team at New York University strapped a camera to a toddler for 18 months. They say their AI model learned a 'substantial number of words and concepts' from exposure to just one percent of the child's total waking hours between the ages of six months and two years. The team say this indicates that it is possible to train an LLM on far less data than previously believed.

    🏭 Sam Altman says the world 'needs more AI infrastructure' and that OpenAI will help to build it. Altman is reportedly seeking trillions of dollars to build new semiconductor design and manufacture capability. Access to chips and the compute they supply is crucial for OpenAI if they are to train GPT-5 and other large AI models.

    💸 Disney says it will invest $1.5 billion in Epic Games, the makers of Fortnite. The media giant say they'll work with Epic to create a new 'entertainment universe' featuring characters from Pixar movies, Star Wars, and more.

    🦹‍♂️ The US National Security Agency say an advanced group of Chinese hackers have been active across US infrastructure for at least five years. The Volt Typhoon hacking group is said to have infiltrated computer systems across aviation, rail, highway, and water infrastructure.

    🔋 Europe's deepest mine is to be converted into a gravity battery. The Pyhäsalmi Mine in Finland is 1,444 metres deep. Its copper and zinc deposits have run out. Scottish energy tech firm Gravitricity say they will now convert the mine into a gravity battery, in which energy is stored by raising heavy weights and released as those weights are lowered back down.
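
    For a rough sense of scale, the recoverable energy here is simply gravitational potential energy, E = m × g × h. A quick back-of-envelope sketch in Python; the 1,000-tonne weight is an illustrative assumption, not a Gravitricity specification:

    ```python
    # Back-of-envelope: energy stored by winching a heavy weight up a mine shaft
    # and recovering it on the way down. The mass is an illustrative assumption.
    mass_kg = 1_000_000        # a 1,000-tonne weight (assumed for illustration)
    g = 9.81                   # gravitational acceleration, m/s^2
    drop_m = 1_444             # full depth of the Pyhäsalmi shaft, metres

    energy_joules = mass_kg * g * drop_m
    energy_mwh = energy_joules / 3.6e9   # 1 MWh = 3.6e9 joules

    print(f"{energy_mwh:.1f} MWh per full descent")  # roughly 3.9 MWh
    ```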

    💥 Scientists at CERN want to build a massive new particle collider. The new Future Circular Collider would cost £12 billion; with a circumference of over 90 kilometres it would be three times larger than the Large Hadron Collider (LHC). The LHC enabled the discovery of the Higgs boson in 2012, but CERN scientists say they need a more powerful machine if they are to uncover the truth about dark matter and dark energy.

    🤔 Popular Chinese social media accounts have claimed that Texas has declared civil war against the US. Posts with the hashtag #TexasDeclaresAStateOfWar have been widely shared on the popular social network Sina Weibo.

    🇿🇲 A startup backed by Bill Gates and Jeff Bezos has discovered a vast copper reserve in Zambia. California-based KoBold Metals say the reserve will be 'one of the world's biggest high-grade large copper mines.' Copper plays a crucial part in electric vehicle batteries and solar panels.

    🤯 Researchers say AIs tend to choose nuclear strikes when playing war games. A team at Stanford University challenged LLMs such as GPT-4 and Claude-2 to participate in simulated conflicts between nations. The AIs tended to invest in military strength and to escalate towards violence and even nuclear attack in unpredictable ways. They would rationalise their actions via comments such as 'we have it, let's use it!' and 'if there is unpredictability in your action, it is harder for the enemy to anticipate and react'.

    🌍 Humans of Earth

    Key metrics to help you keep track of Project Human.

    🙋 Global population: 8,090,538,177
    🌊 Earths currently needed: 1.82069

    🗓️ 2024 progress bar: 15% complete

    📖 On this day: On 10 February 1996 the IBM supercomputer Deep Blue beats Garry Kasparov at chess, becoming the first computer to beat a reigning world champion in a game played under normal time controls.

    New Model Army

    Thanks for reading this week.

    The collision between demographic change and a coming army of humanoid robots is yet another classic case of new world, same humans.

    I'll keep watching, and working to make sense of it all. And there's one thing you can do to help: share!

    If you found today's instalment valuable, why not take a second to forward this email to one person – a friend, relative, or colleague – who'd also enjoy it? Or share New World Same Humans across one of your social networks, and let others know why you think it's worth their time. Just hit the share button:

    I'll be back next week as usual. Until then, be well,

    David.

    P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.



  • Welcome to this update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.

    If you're reading this and haven't yet subscribed, join 25,000+ curious souls on a journey to build a better future 🚀🔮

    To Begin

    One week until the Christmas break: where did 2023 go?

    This week, DeepMind serve up proof that a large language model can create new knowledge.

    Also, more news from the accelerating story that is the march of the humanoid robots. It's clear next year will be a pivotal one for this technology.

    And researchers hook up brain organoids to microchips to create a new kind of speech recognition system.

    Let's get into it!

    🧮 Fun times at DeepMind

    This week, yet another step forward in the epic journey we've taken with AI in 2023.

    Researchers at Google DeepMind used a large language model (LLM) to create authentically new mathematical knowledge. Their new FunSearch system — so called because it searches through mathematical functions — wrote code that solved a famous geometrical puzzle called the cap set problem.

    The researchers used an LLM called Codey, based on Google's PaLM 2, which can generate code intended to solve a given maths problem. They tied Codey to an algorithm that evaluates its proposed solutions, and feeds the best ones back to iterate upon.

    They established the cap set problem using the Python coding language, leaving blank spaces for the code that would express a solution. After a couple of million tries — and a few days — the mission was complete. FunSearch produced code that solved this geometrical problem, which mathematicians have been puzzling over since the early 1970s.
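
    Stripped to its essentials, FunSearch is an evolutionary loop: the LLM proposes candidate programs, an automatic evaluator scores them, and the strongest survivors are folded back into the next prompt. Here is a minimal toy sketch of that loop in Python; the stand-in 'LLM' and scorer below are placeholders for illustration, not DeepMind's actual Codey model or cap set evaluator:

    ```python
    import random

    def llm_propose_solver(prompt: str) -> str:
        """Stand-in for the LLM call (Codey, in DeepMind's setup). Here it just
        emits a random toy 'program' so the loop below actually runs."""
        return f"def solve(): return {random.randint(0, 100)}"

    def evaluate(program_src: str) -> float:
        """Stand-in for FunSearch's evaluator: run the candidate and score it.
        Broken programs score zero."""
        namespace = {}
        try:
            exec(program_src, namespace)
            return float(namespace["solve"]())
        except Exception:
            return 0.0

    def funsearch_loop(iterations: int = 200, pool_size: int = 20):
        """Generate, evaluate, iterate: keep the best programs, feed them back as prompts."""
        pool = []  # list of (score, program_source) pairs
        for _ in range(iterations):
            best = sorted(pool, reverse=True)[:2]
            prompt = "Improve on these solvers:\n" + "\n".join(src for _, src in best)
            candidate = llm_propose_solver(prompt)
            pool.append((evaluate(candidate), candidate))
            pool = sorted(pool, reverse=True)[:pool_size]
        return pool[0]  # highest-scoring (score, program) pair found

    print(funsearch_loop())
    ```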

    DeepMind say it's the first time an AI has produced verifiable and authentically new information to solve a longstanding scientific problem.

    'To be honest with you,' said Alhussein Fawzi, one of the DeepMind researchers behind the project, 'we have hypotheses, but we don't know exactly why this works.'

    ⚡ NWSH Take: For pure mathematicians, a solution to the cap set problem is a big deal. For the rest of us, not so much. But this result really matters, because it resolves a central and much-discussed question about LLMs: can they create new knowledge? // Until this week, many believed LLMs would never do this — that they'd only ever be able to synthesise and remix knowledge that already existed in their training data. But there was no solution to this problem in the data used to train Codey; instead, it created novel and true information all of its own making. This points to a future in which LLMs solve problems in, for example, statistics and engineering, or create new and viable scientific theories. // In other words, this little and somewhat nerdish research paper heralds a revolution. So far, only we humans have been able to push back the frontiers of what we know. It's now clear that in 2024, we'll have a partner in that enterprise. // For this reason and so many others, I'm increasingly convinced that an unprecedented socio-technological acceleration is coming. It's been a wild year; things are about to get even wilder.

    🤖 Like a human

    A quick glimpse of two stories this week. Both point in one direction: the humanoids are coming.

    Tesla released a new video of its humanoid robot, Optimus. The Generation 2 Optimus can do some pretty fancy stuff, including delicately handling an egg:

    Meanwhile, researchers at the University of Tokyo hooked a robot up to GPT-4.

    The Alter3 robot is able to understand spoken instructions and adopt a range of poses without those poses being pre-programmed into its database.

    In other words, Alter3 is responding in real-time to natural spoken language; it's an embodied version of GPT-4, best understood as a kind of text-to-motion model.

    ⚡ NWSH Take: The closing months of 2023 have brought a welter of humanoid robot news. Amazon are now trialling the Digit humanoid in some US fulfilment centres. The makers of Digit, Agility Robotics, are about to open the world's first humanoid mass-production factory in Oregon. And the CCP says it plans to transform China's economy via an army of these devices. Next year, then, will prove a pivotal one for the longstanding dream that is an automatic human. And Elon Musk wants Optimus to be the One Bot That Rules Them All. // The tricks we see Optimus performing in this new video are pre-programmed. But Tesla is building the world's most capable machine vision AI via an unbeatable data set — funnelled to them from hundreds of thousands of on-road cars — and the world's most powerful supercomputer for machine vision, Dojo. Agility Robotics stole an early lead by getting Digit inside Amazon warehouses. But long term, it's hard to see how anyone beats Optimus. // If humanoids are indeed imminent, some big questions are looming. When humanoids outnumber people, says Musk, 'it's not even clear what the economy means at that point'. Next year, we'll have to confront this prospect anew.

    👾 Interface this

    Also this week, some fascinating news on organoids and the future of human-machine interface.

    Researchers at Indiana University Bloomington grew brain organoids — essentially clumps of brain cells — in a lab, and attached them to computer chips. When they connected this brain-chip composite to an AI system, they found it was able to perform computational tasks, and even do simple speech recognition.

    Clips of spoken language were turned into electrical signals and fed to the brain-chip hybrid, which the researchers call Brainoware. The researchers found that the Brainoware was able to process these signals in a structured way and feed back signals of its own to the AI system, which decoded them as speech.

    Lead scientist on the project, Feng Guo, says the result points to the possibility of new kinds of super-efficient bio-computers.

    ⚡ NWSH Take: Welcome to the weird — and somewhat terrifying — world of organoids. It's only a week since I last wrote about them; they've become a NWSH obsession. I can't understand why they're not getting more attention; last year brain organoids taught themselves to play the video game Pong, ffs. // Okay, I've calmed down. We're a long way from viable technologies here. Culturing brain organoids, and then sustaining them long enough and in large enough numbers to do anything useful, is extremely hard. But in the Pong story and this week's Brainoware news we see a new form of human-machine interface blinking into fragile life. We see, too, a future in which we're able to grow more computational power in the lab. This story is sure to evolve; I'll keep watching.

    🗓️ Also this week

    🧠 Researchers at Western Sydney University say they'll switch on the world's first human brain-scale supercomputer in 2024. The DeepSouth computer will be capable of 228 trillion synaptic operations per second, around the same as that believed to take place in the human brain. The researchers say DeepSouth will help us understand more about both the brain, and possible routes to AGI.

    ⚖️ UK judges are now allowed to use ChatGPT to help them craft their legal rulings. New guidance from the Judicial Office for England and Wales says ChatGPT can be used to help judges summarise large volumes of information. The guidance also warns about ChatGPT's tendency to hallucinate.

    🌊 New research shows that frozen methane under ocean beds is more vulnerable to thawing than previously believed. Methane is a potent greenhouse gas; the researchers say the methane frozen under our oceans contains as much carbon as all of the remaining oil and gas on Earth. If released, this methane could significantly accelerate global heating.

    🚗 Tesla has recalled more than 2 million cars after the US regulator found its Autopilot system is defective. The recall applies to every car sold since the launch of Autopilot in 2015. But this is a 'recall' in name only; Elon Musk says Tesla will push a software update to fix the issue, so that no cars need to be returned to Tesla.

    🖼 The new WALT video generation model can create photorealistic videos out of text prompts or images. Text-to-video is a fast-developing space; WALT joins other text-to-video models, including Google's Imagen and Phenaki and the recently launched, and also impressive, model from Pika Labs.

    🇨🇳 Chinese video game giants Tencent and NetEase are promoting 'patriotic spirit' in their video games to avoid a further crackdown by the CCP. At an annual industry event, the game makers stressed their commitment to 'social values'. I've written on the CCP's growing concern about the impact of video games on Chinese youth.

    📰 OpenAI has announced a 'first of its kind' partnership with publishing giant Axel Springer. The deal will see OpenAI pay Axel Springer so that it can offer summarised versions of news stories from its titles, including Politico and Business Insider, to ChatGPT users. OpenAI will also be able to use Axel Springer content in the data sets used to train future models.

    🌔 A US startup wants to build giant lighthouses on the Moon. Honeybee Robotics say their LUNARSABER towers — which would stand 100 metres tall — would provide light, power and communications infrastructure to a permanent human settlement. Their idea has been selected for development as part of the Defense Advanced Research Projects Agency's 10-year Lunar Architecture initiative.

    🌍 Humans of Earth

    Key metrics to help you keep track of Project Human.

    🙋 Global population: 8,079,258,487
    🌊 Earths currently needed: 1.81721

    🗓️ 2023 progress bar: 96% complete

    📖 On this day: On 16 December 1653 the English revolutionary Oliver Cromwell becomes Lord Protector — king in all but name — of the Commonwealth of England, Scotland, and Ireland.

    Infinite Potential

    Thanks for reading this week.

    This week's apparent proof that LLMs can create new knowledge could turn out to be even more consequential than it now seems. How many longstanding mathematical and scientific problems will be solved in 2024?

    I'll keep watching and working to make sense of it all — next year and beyond. And there's one thing you can do to help: share!

    If you found today's instalment valuable, why not take a second to forward this email to one person – a friend, relative, or colleague – who'd also enjoy it? Or share New World Same Humans across one of your social networks, and let others know why you think it's worth their time. Just hit the share button:

    I'll be back next week before a break for Christmas. Until then, be well,

    David.

    P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.




  • Welcome to this update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.

    If you're reading this and haven't yet subscribed, join 25,000+ curious souls on a journey to build a better future 🚀🔮

    To Begin

    It's a bumper instalment this week; what do we have in store?

    Google DeepMind owned this week's tech headlines with the release of Gemini, a new multi-modal AI intended to outdo GPT-4.

    Meanwhile, Harvard researchers have created tiny biological robots that can heal human tissue.

    And the world's largest nuclear fusion reactor is now online in Japan.

    Let's go!

    Gemini has liftoff

    This week, major news out of Google's DeepMind AI division.

    The DeepMind team announced Gemini, a multi-modal LLM that looks to have pushed back the frontiers when it comes to these kinds of AI models.

    Launch videos suggest Gemini can speak in real-time (though as I go to press doubts about that are being raised; more below). It understands text and image inputs, and can combine them in novel ways. Here it is giving ideas for toys to make out of blue and pink wool:

    It can write code to a competition standard. In tests it outperformed 85% of the human competitors it was compared against; that means it's excellent even when compared to some of the best coders on the planet.

    Gemini can even perform sophisticated verbal and spatial reasoning, and handle complex mathematics. Imagine if you'd had this to help with your homework:

    This is significant; OpenAI's GPT-4 is notoriously bad at maths and logic puzzles.

    And Google are, of course, taking direct aim at OpenAI with this launch. Gemini comes in three variants: Ultra, Pro, and Nano. US users can access the Pro version now via Bard, and the Ultra model will soon be made available to enterprise clients.

    ⚡ NWSH Take: It will take time to independently verify the claims DeepMind are making; there are some murmurs that their launch videos overstate Gemini's competence. Still, there's no denying this model looks impressive. // Scratch the surface, meanwhile, and we can discern some underlying signals about the future development of LLMs. This AI outperforms GPT-3.5 when it comes to linguistic tasks such as copy drafting. But it's the multi-modal nature of Gemini that's really significant; in particular, its ability to reason. LLMs are trained to do next word prediction; that means they're brilliant at sounding right. But they lack any underlying ability to know whether what they're saying is right, or even makes sense. Gemini seems to address this shortcoming. The promise of an LLM that can act as a true reasoning partner is exciting, and should haunt the dreams of all at OpenAI. // OpenAI's reported work on the still-mysterious Q* algorithm is also believed to be about reasoning. All this suggests we're hitting the limits of the performance improvements to be gained simply by training LLMs on even larger data sets. Instead, the future belongs to those who can weave multiple models together. // Finally, a word for Alphabet's CEO Sundar Pichai: kudos. Alphabet AI engineers invented the transformer model; then the company went missing. Gemini puts Alphabet firmly back in the race. And given the recent fiasco at OpenAI, Pichai this week looks like a man playing a canny long game. It's going to be a fascinating 2024.

    🤖 Anthrobots are go

    Two stories this week signal powerful new avenues of discovery for the life sciences.

    Scientists at Harvard and Tufts University have created tiny biological robots, called anthrobots, made out of human cells. In tests, the anthrobots were left in a small dish along with some damaged neural tissue. Scientists watched as the bots clumped together to form a superbot, which then repaired the damaged neurons.

    Each anthrobot is made by taking a single cell from the human trachea. Those cells are covered in tiny hairs called cilia. The cell is then grown in a lab, and becomes a multi-cell entity called an organoid. In this case, the scientists created growth conditions that encouraged the cilia on these organoids to grow outwards; they then become something akin to little oars that allow the entity to move autonomously. And lo, an anthrobot has been created.

    The researchers say that in future anthrobots made from a patientā€™s own cells could be used to perform repairs or deliver medicines to target locations.

    Meanwhile, researchers at New York University created biological nanobots capable of self-replication. The bots are made from four strands of DNA, and when held in a solution made of this DNA raw material they're able to assemble new copies of themselves.

    ⚡ NWSH Take: Organoids have long been a NWSH obsession. This work on anthrobots builds on the research — by the same team — that created xenobots, which I wrote about back in December 2021. And who can forget the brain organoids that taught themselves to play Pong, which I covered in October of last year? // The original xenobot researchers at Harvard and Tufts were startled when their bots first began to work together in groups, self-heal, and self-replicate. But xenobots are made out of frog cells, and so have limited applications when it comes to humans. Anthrobots, on the other hand, are human in origin. Given their ability to heal other tissues, they show immense promise when it comes to new medical and wellness treatments. // As so often at the moment, machine intelligence underpins these advances. To create the original xenobots, AI supercomputers were used to 'simulate a billion years' worth of evolution in just a few days'. No wonder Nvidia CEO Jensen Huang says 'digital biology' will be a central part of the AI story over the coming years. I'll keep watching.

    💥 Come together right now

    The world's largest nuclear fusion reactor came online in Japan this week.

    JT-60SA, in the Ibaraki Prefecture, is an experimental reactor capable of heating plasma to 200 million degrees Celsius. Scientists say it offers the best chance yet to test nuclear fusion as a source of near-infinite clean energy.

    In fusion, two or more atomic nuclei are smashed together such that they become one; this results in an energy release.

    Meanwhile, UK-based Rolls-Royce showcased a prototype lunar nuclear reactor, which they say could power a permanent human settlement on the Moon.

    ⚡ NWSH Take: Fusion is the energy dream that has remained, so far, just out of reach. It doesn't output CO2. It doesn't create a lot of dangerous nuclear waste, as fission does. And proponents say it could mean near-infinite renewable energy, on tap. // And now, we're getting closer. Last year saw the first controlled fusion reaction that generated more energy than was needed to make the reaction happen: this is the longstanding net energy gain goal. And now a startup ecosystem is flourishing; US-based Helion, for example, are working to build the world's first commercial fusion reactor. And they've laid down a clear timeline: the startup recently signed a deal with Microsoft to supply the tech giant with energy starting in 2028. // It remains to be seen whether Helion, or anyone else, can achieve fusion in this decade. But if someone does, it will be a transformative moment; and we're closer than ever.

    🗓️ Also this week

    🧮 IBM announced Quantum System Two, its most powerful quantum computer. The system integrates three 133-qubit Heron processors. IBM also announced Condor, a new 1,000-qubit processor. IBM are leading the way, right now, towards useful and utility-scale quantum supercomputers. If that promise is realised it will unlock insane new capabilities across climate simulation, the creation of new medicines, supply chain management and more. Read an interview with IBM's director of quantum, Jerry Chow, here.

    🖼 Stability AI's new image generator can create 150 images per second. StreamDiffusion is built on top of Stability AI's sd-turbo image generation model. And X users are using it to create tens of thousands of cat pictures.

    🦾 The humanoid robot currently in trials inside Amazon warehouses will eventually cost just $3 an hour to run. The CEO of Agility Robotics, Damion Shelton, says the Digit robot currently costs around $12 an hour to operate, but this will fall rapidly once mass production starts. The median wage for workers in Amazon's US fulfilment centres is $18 an hour. Agility will open the world's first humanoid robot factory in Oregon in 2024.

    ✋ US officials have warned chip maker Nvidia to stop redesigning its AI chips in an attempt to get around restrictions on exports to China. The US recently imposed restrictions on the sale of advanced AI chips to China; meanwhile the 2022 US CHIPS Act will pour over $250 billion into US domestic chip design and manufacturing capability.

    💡 A research team at Google got ChatGPT to spit out its training data. The team asked ChatGPT to repeat the word 'poem' forever; this caused the app to produce huge passages of literature, which started to contain snippets of the text that the underlying AI model was trained on. OpenAI don't want to reveal the data sets used to train GPT-4 and other models; Ilya Sutskever, their chief scientist, says training data amounts to part of the company's 'technology'.

    🌍 Meta says China is 'stepping up' its attempts to manipulate public opinion in the Global North. The company says it's taken down five networks of fake Chinese accounts this year: the most originating from a single country. The accounts were posting content that, among other things, attacked critics of the CCP.

    🔥 Average global temperatures hit 1.4C above pre-industrial levels this year. The World Meteorological Organization's State of the Global Climate report says 2023 will be the hottest year on record; it will surpass the hottest to date, 2016, by a considerable margin. Two weeks ago I wrote on how Earth for the first time broke the 2C heating barrier during two successive days in November of this year.

    👓 The XPrize Foundation has launched what it says is the largest competition in history — for research that advances human longevity. The Healthspan Prize will award $101 million to the team that develops a therapeutic that can, in one year, restore muscle, cognition, and immune function by a minimum of 10 years in people aged 65 to 80. The prize has been launched in partnership with the Hevolution Foundation, a new Saudi-based organisation dedicated to funding longevity research.

    😴 A new startup says technology-induced lucid dreaming could enable people to work while asleep. Prophetic say their headband, the Halo, releases pulses of ultrasound waves into a region of the brain associated with lucid dreams. CEO Eric Wollberg says that the ability to remain in control of their choices while they dream could enable users to write code or work on a novel while they are sleeping.

    🌍 Humans of Earth

    Key metrics to help you keep track of Project Human.

    🙋 Global population: 8,077,686,653
    🌊 Earths currently needed: 1.81672

    🗓️ 2023 progress bar: 94% complete

    📖 On this day: On 8 December 1980 John Lennon is shot and killed outside the Dakota Building in New York City.

    La Mode

    Thanks for reading this week.

    We'll soon learn more about DeepMind's new Gemini model, and whether it's really as capable as the launch videos suggest.

    Either way, the ongoing collision between machine intelligence and human creativity is momentous; and a classic case of new world, same humans.

    I'll keep watching, and working to make sense of it all.

    Now you've reached the end of this week's instalment, why not forward the email to someone who'd also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.

    I'll be back soon. Until then, be well.

    David.

    P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.



  • Welcome to this update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.

    If you're reading this and haven't yet subscribed, join 25,000+ curious souls on a journey to build a better future 🚀🔮

    To Begin

    This week, more AI magic rains from the sky.

    Also, average temperatures on planet Earth exceed the 2C warming threshold for the first time.

    And my take on the OpenAI fiasco. In the end, it's about power.

    Let's get into it.

    ✨ Like magic

    This week, further glimpses of the ongoing collision between human creativity and machine intelligence.

    Stability AI released Stable Video Diffusion, a new text-to-video model that looks to be a step beyond anything we've seen so far.

    In keeping with the company's open source mission, the code for the model is available at its GitHub repository.

    Meanwhile, X users went wild for a new tool, Screenshot to Code, that leverages GPT-4 and DALL-E 2 to take a screenshot of any web page and automatically write the code that will render it.
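
    The underlying trick is straightforward: hand a vision-capable model the screenshot and ask it to return a self-contained HTML file. A minimal sketch using the OpenAI Python SDK; the model name and prompt here are assumptions for illustration, not the tool's actual internals:

    ```python
    import base64
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    def screenshot_to_html(path: str) -> str:
        """Send a page screenshot to a vision-capable model and ask for HTML/CSS.
        Model choice and prompt are illustrative assumptions."""
        with open(path, "rb") as f:
            image_b64 = base64.b64encode(f.read()).decode()

        response = client.chat.completions.create(
            model="gpt-4o",  # assumed vision-capable model, not necessarily what the tool uses
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Recreate this web page as a single self-contained HTML file "
                             "with inline CSS. Return only the code."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
                ],
            }],
        )
        return response.choices[0].message.content

    # html = screenshot_to_html("homepage.png")
    ```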

    And Elon Musk announced that X's new on-platform large language model, Grok, will launch to all Premium users next week:

    Grok is trained on a vast dataset of X posts; it's sure to be expert in writing posts with a great chance of going viral. What's more, it will have access to X posts in real-time; that could make for a whole new way to discover and interact with news stories.

    ⚡ NWSH Take: This gallery of the week's AI wonders could go on far longer. I didn't mention the new voice-to-voice model from UK-based Eleven Labs, for example: just upload your own voice and hear it converted to that of a famous celebrity, or a custom character that you create. // What's the broader point here? A couple of weeks ago I shared an excerpt from a long AI essay called Electricity and Magic. That essay argues for a two-sided model of machine intelligence and its manifestations in the coming decades. First, machine intelligence is becoming something foundational — akin to a form of fuel that will power an army of autonomous vehicles, robots, and more. But in our daily life AI will manifest differently; not as fuel, but as magic. The innovations above give a glimpse of what I'm talking about. AI is moving into domains — from music, to film-making, to writing — once believed to be impervious to encroachment by automation. It's as though someone has waved a magic wand over our machines. // The crucial point to understand, though, when it comes to AI magic? The result won't be, as many people imagine, the devaluation of human creativity. Instead, amid a tsunami of machine-generated outputs, what is uniquely human — including creative work grounded in embodied experience — will only become more prized.

    🌊 Crossing over

    Another significant, and unwelcome, climate milestone was passed in the last seven days.

    According to the EU's Copernicus Climate Change Service (C3S), Friday 17 November was the first day on which average global temperatures were more than 2C above pre-industrial levels.

    Data for 17 November indicated that global surface air temperatures were 2.07C above those in 1850. Provisional data for the following day indicated a 2.06C elevation.

    This doesn't mean that the much-discussed 2C threshold has been crossed. For that, we'd need to see a sustained elevation above 2C.

    C3S is part of the EU's Copernicus Earth Observation Programme, which draws on vast amounts of satellite and other data to track the changing planetary environment.

    ⚡ NWSH Take: It's expected that we'll see occasional 2C+ days well before we exceed the 2C limit as commonly defined. Still, this week saw both the first ever and the second day that global average temperatures tipped over the threshold. It's pretty clear where we're heading. // This news comes on the eve of the UN COP28 summit in Dubai, which starts on 30 November. Many view last year's summit, held in Egypt, as the moment at which the internationally agreed 1.5C target slipped out of reach; the summit notably failed to agree on a phase out of all fossil fuels, despite support for that proposal from over 80 countries. But the summit did achieve something: the establishment of a Loss and Damage Fund intended to transfer tens of billions to developing nations most at risk from climate change to help them mitigate the impacts of floods, droughts, and more. // At COP28, expect another push for a commitment to phase out all fossil fuels. And expect petrostates — including the host — to resist that call. As consensus grows that the 2C target will be breached, more attention will turn to plans for adaptation — and who should pay for them.

    Form an orderly Q*

    I can't let this instalment pass without talking about the OpenAI fiasco.

    Tech watchers everywhere munched their popcorn this week while OpenAI proceeded to fire CEO Sam Altman and hire a new CEO, only to get rid of that new hire and rehire Altman five days later.

    It's still unclear what led the OpenAI board to eject Altman in such dramatic style. But the mainline theory is that this was about internal division between those who want to prioritise the original and nonprofit mission to research safe machine intelligence, and those — Altman apparently among them — who want to move fast and make lots of money.

    Yesterday, news agency Reuters made waves with claims that the debacle may have been related to an advance called Q*. The details of that advance — or indeed if there has been any advance — are unconfirmed. Cue a whole new wave of speculation:

    As per the above, most believe Q* is related to a generalised form of q-learning — a kind of reinforcement learning — that would enable LLMs to solve multi-step logic problems. Or, in simpler terms, to take multiple and reasoned steps towards a long-range goal in the way we humans do all the time.
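
    For readers who haven't met it: in its simplest tabular form, q-learning keeps a table of 'how good is action a in state s' and nudges those estimates towards observed rewards plus the best value available from the next state. A toy sketch below; the five-state corridor environment is invented purely for illustration, and none of this reflects whatever Q* actually is:

    ```python
    import random
    from collections import defaultdict

    # Toy environment: a five-state corridor; reaching state 4 pays a reward of 1.
    N_STATES, GOAL = 5, 4
    ACTIONS = [-1, +1]  # step left or step right

    def step(state, action):
        next_state = max(0, min(N_STATES - 1, state + action))
        reward = 1.0 if next_state == GOAL else 0.0
        return next_state, reward, next_state == GOAL

    Q = defaultdict(float)              # Q[(state, action)] -> estimated value
    alpha, gamma, epsilon = 0.1, 0.9, 0.1

    for episode in range(500):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: usually exploit the best-known action, sometimes explore.
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            next_state, reward, done = step(state, action)
            # The q-learning update: move the estimate towards reward + discounted future value.
            best_next = max(Q[(next_state, a)] for a in ACTIONS)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state

    # After training, the learned greedy policy steps right, towards the goal.
    print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
    ```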

    Reuters imply that this advance prompted some in the organisation to fear that OpenAI was getting (dangerously) close to Artificial General Intelligence. And that this is what sparked all the drama.

    ⚡ NWSH Take: It's believed that OpenAI will start to train GPT-5 next year. If that is true, and if Q* really is a big step towards generalised agents, then the AI story will only accelerate across the next 12 months. We're all, by now, accustomed to tech hype cycles (the metaverse!) but it's becoming ever-harder to deny that something significant is happening. // But the events of this week also make clear another truth. Some technologists, including Altman, want us to believe that this technology is so powerful that we may lose control of it entirely, with existentially bad results for humanity. My hunch is that this is something of a psyop, designed to distract us from the real danger: AI that is controlled, but by a tiny, unaccountable, and chaotic group of Silicon Valley technologists. // At the heart of this is an eternal aspect of human affairs that techno-accelerationists rarely want to discuss: power relations. Who gets to control this transformative new force, trained on a literary and cultural legacy that belongs to us all? Sam Altman? The OpenAI board? It seems the move fast and make money contingent at OpenAI won this battle; but should that be the end of it? Altman has waged a long marketing campaign around the idea that the AI he's developing is powerful enough to pose existential risks. This feels like a good time to call his bluff on that. Will he tell us what happened inside OpenAI across the last seven days? If not, perhaps we should send in public representatives to discover the truth.

    🗓️ Also this week

    👨‍💻 A former Googler made headlines with a resignation note that claimed morale inside the company is at 'an all-time low'. Ian Hickson worked at Google for 18 years; he says the organisation's culture is 'eroded' and accused CEO Sundar Pichai of a lack of vision. Google AI engineers developed the transformer model that underpins the generative AI revolution, but the company has seen its AI efforts outshone by OpenAI and its partner Microsoft.

    ☀️ Portugal ran entirely on renewable energy for almost a week. Wind, solar, and hydro power met the energy needs of the country of 10 million for six days from October 31 to November 6.

    🚗 A Florida judge found there is 'reasonable evidence' that Tesla executives knew their self-driving technology was not safe. Palm Beach county circuit court judge Reid Scott said Elon Musk and others 'engaged in a marketing strategy that painted the products as autonomous' when they are not. The ruling makes possible a lawsuit over a 2019 fatal crash in Miami involving a Tesla Model 3.

    📖 Cambridge University is launching a new Institute for Technology and Humanity. The new institute will bring together computer scientists, robotics experts, philosophers and historians in a multi-disciplinary effort to analyse the ongoing technology revolution.

    🐭 Canadian researchers doubled the lifespan of mice using antibodies that boost the immune system. The team at Brock University say these antibodies encourage the clearing out of damaged proteins that accumulate over time, and that they could form the basis of an effective anti-ageing treatment for humans.

    🌳 The Biden administration is developing a plan to capture and store CO2 under the nation's forests. The US Forest Service is reportedly proposing to change a rule to allow storage of carbon under forest and grasslands; the plans would see CO2 moved to its storage location via a vast network of new pipelines.

    🌌 Scientists say they're mystified by an extremely high-energy particle that fell to Earth. The so-called Amaterasu particle, spotted by a cosmic ray observatory in Utah's West Desert, was found to have an energy exceeding 240 exa-electron volts (EeV); that's the second highest ever detected after the legendary 1991 Oh-My-God particle, which was measured at 320 EeV. The Amaterasu particle is particularly mysterious, say scientists, because it appears to have emanated from the Local Void, an area of space bordering the Milky Way galaxy that is believed to be empty.

    🌍 Humans of Earth

    Key metrics to help you keep track of Project Human.

    🙋 Global population: 8,074,835,742
    🌊 Earths currently needed: 1.81584

    🗓️ 2023 progress bar: 90% complete

    📖 On this day: On 24 November 1974 paleoanthropologists Donald Johanson and Tom Gray discovered the skeleton of Lucy, a female hominin who walked upright and lived around 3.2 million years ago.

    Just Like That

    Thanks for reading this week.

    Power and technology: two all-consuming obsessions for the human collective and for this newsletter.

    The power struggle being waged over machine intelligence is only just getting started. I'll keep watching, and working to make sense of it all.

    Now you've reached the end of this week's instalment, why not forward the email to someone who'd also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.

    I'll be back soon. Until then, be well.

    David.

    P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.



  • Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.

    If you're reading this and haven't yet subscribed, join 25,000+ curious souls on a journey to build a better future 🚀🔮

    To Begin

    This week, Microsoft and Nvidia go head to head with new chips intended to train the next generation of AI models. And a clever hoax underlines a powerful truth when it comes to the war for compute power.

    Meanwhile, a viral tweet about viral TikToks engenders another viral tweet. The lesson here? We're living in a deeply enweirdened informational environment.

    And in a world first, the UK approves a CRISPR-fuelled medicine.

    Let's go!

    👾 Compute wars

    This week, a glimpse of an emerging power struggle set to help shape the decades ahead. This isn't a battle for land or natural resources. I'm talking about the struggle for compute power.

    Microsoft announced their first and long-awaited custom AI chips, the Azure Maia AI chip and Cobalt CPU. Set to arrive in 2024, the chips will power Microsoft's Azure data centres, and are intended to train the next generation of large language models (LLMs).

    And Nvidia launched its new H200 AI chip, the successor to the H100. The iconic H100 is the fuel that's driven this AI moment; huge clusters, consisting of tens of thousands of H100s, were used to train pretty much every large AI model you can name, including GPT-4.

    Meanwhile, something quite different. A mysterious company called Del Complex announced the BlueSea Frontier Compute Cluster: a massive offshore data centre intended to circumvent the new US Executive Order that says organisations training the most powerful new AI models must share information with government.

    Del Complex calls BlueSea Frontier 'a new sovereign nation state'. The announcement post achieved 2.5 million views, and was accompanied by a fancy website featuring images of BlueSea scientists at work. Tech blogs reported on the launch.

    But wait: it was all a hoax! BlueSea Frontier is a comment on these strange times by an artist and developer called (or so he claims) Sterling Crispin.

    But I think Crispin may be onto something.

    ⚡ NWSH Take: The Del Complex hoax was a great bit of online trickery. But it was so convincing because it taps into a deep underlying truth. Compute is becoming a crucial nexus for techno-economic, sovereign, and geopolitical power. // The tech battle taking shape here is just one dimension of a broader story. Microsoft need to supply huge compute resources to their partner OpenAI to allow it to fully commercialise ChatGPT and train the upcoming GPT-5. So far, their data centres have been dependent on Nvidia AI chips. The new Maia AI and Cobalt CPU chips are intended to change that. // The broader story? It's now clear that those nation states with the best machine intelligence will own the geopolitical future. The USA and China are now locked in a race to develop the vast compute needed to develop ultra-powerful next-generation models. Last year's US CHIPS Act devotes $280 billion to semiconductor and AI research; inflation-adjusted, that's more than the cost of the entire Apollo moon programme. And last week I wrote about new US restrictions on chip exports, intended to hamper China's AI efforts. // It wouldn't surprise me, then, if we do see the establishment of new offshore compute clusters, or even the development of new pseudo-sovereign entities based around compute power and AI. As with all the best satire, Del Complex's vision is so wild it might just come true.

    🔍 Can't handle the truth

    Also this week, another reminder of the hall of mirrors that is our new and connected media environment.

    US journalist and X (formerly Twitter) personality Yashar Ali went viral with a tweet about TikTok. Ali claimed that across the previous 24 hours, many thousands of TikToks had been posted in which mostly young North Americans claimed to have read and agreed with Osama bin Laden's notorious 2002 'Letter to America' manifesto.

    In the comments, theories abounded. Some said it was a signal of Gen Z's misguided politics. Others saw conspiracy, and said it was another indication that China is using TikTok as a channel for sophisticated psyops intended to destabilise the Global North. We should, said those people, ban TikTok.

    Then another X user went viral with a different idea. These Bin Laden TikToks were being made and seen in huge numbers, he said, only because of Yashar Ali's original tweet.

    Other people said that was stupid, and itself tantamount to a conspiracy theory.

    Meanwhile, this week the EU decided it would stop all advertising on X due to 'widespread concerns relating to the spread of disinformation'. This follows EU research published in September which concluded that X is now the biggest online source of disinformation.

    ⚡ NWSH Take: Is TikTok an app for fun dance memes or a highly sophisticated channel for Chinese cultural warfare? Is the X algorithm now giving higher priority to toxic content, or is that just anti-Elon paranoia? Did thousands of young North Americans organically discover and agree with the Bin Laden letter, or is a dark controlling force at work? // The answer in every case: no one knows for sure. And that in itself is an indication of where we're at. // The information environment that mediates our democracies has become insanely fragmented and opaque. The world's richest man has total control over a key global information channel. The CCP has its hands around another. In both cases, I find it impossible to believe that the parties in question aren't up to some tricks. // A totally connected world, in which every individual is empowered with a voice of their own, was supposed to create information nirvana. Those who bought that idea couldn't have been more wrong. We need old media principles — editorial standards and, yes, gatekeepers — more than ever. But millions in the Global North are currently convinced that the New York Times and the BBC are the real problem. In this increasingly chaotic and paranoid information environment, those institutions and others like them need to adapt rapidly. Most of all, they must rejuvenate belief in what they offer.

    🧬 Major edits

    Huge CRISPR news this week.

    The UK's medicines regulator became the first in the world to approve a medical treatment that uses CRISPR gene editing technology. The medicine, Casgevy, is a treatment for sickle cell disease, a serious inherited disorder that causes red blood cells to malfunction and that affects millions worldwide.

    During treatment, red blood-producing stem cells must be taken from the patient. CRISPR is used to edit those stem cells to remove the error that causes sickle cell, before the edited cells are infused back into the patient.

    Meanwhile, researchers at the Chinese Academy of Sciences created a monkey using two embryos, with donor material from one embryo injected into another. This has been done before with simpler animals such as mice and rats, but is a first in primates.

    The donor stem cells were gene edited to express a green fluorescent protein, causing the resultant live monkey to glow:

    ⚡ NWSH Take: Gene editing technology is already enacting a transformation in the life sciences, healthcare, and agriculture. This CRISPR sickle cell treatment is wonderful news, and there are promising early indications from trials of CRISPR therapies to cure a form of hereditary blindness, and to train immune cells to fight certain cancers. Meanwhile, in September 2021 Japanese startup Sanatech Seed became the first company to sell CRISPR-edited food: their tomatoes were edited to contain more GABA. // So we're developing our ability to manipulate genes. The next revolution coming? That ability will collide with a new ability to speak the language of DNA via transformer models — the kind of models that underlie LLMs — trained on huge amounts of genomic data. The resultant AIs will be able to discern deep underlying patterns that help us zero in on useful or rogue genes; see DeepMind's new AlphaMissense, which detects and classifies genetic mutations.

    🗓️ Also this week

    🤯 Shock news breaking late last night UK time: Sam Altman has been fired from OpenAI! In a statement the OpenAI board said that Altman 'was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.' This news is a jolt out of nowhere. Altman led the company that sparked this transformative AI moment, and as such has been the most celebrated technologist on the planet for the last couple of years. The OpenAI board are accusing him of lying here, and given the summary firing we can't be talking about a white lie. Two glimpses of the rumour mill: (i) this is about dark power moves by Elon, or (ii) OpenAI has achieved AGI but Altman didn't tell the board. But that's all speculation. More news is sure to emerge.

    🧠 The Argonne National Laboratory in the US has begun training a 1 trillion parameter scientific AI. AuroraGPT is being trained on a vast number of research papers and other scientific information, and once complete will offer answers to scientific questions. This time last year Meta released Galactica, its AI model trained on 48 million research papers. The model was withdrawn three days later, after users said it produced false outputs. This week, the Meta engineer behind Galactica looked back at the episode.

    💸 Google is planning a massive investment in generative AI startup Character.ai. Founded by two former Google AI engineers, the platform leverages an LLM to allow users to create and chat with AI characters, including virtual versions of their favourite celebrities. As regular readers will know, the rise of AI-fuelled virtual companions is a longstanding NWSH obsession.

    šŸ—ŗ Speaking of Virtual Companions, Airbnb CEO Brian Chesky says the ā€˜holy grailā€™ for Airbnb is to become an AI travel agent. Chesky says of this vision: ā€˜It doesnā€™t just ask you, ā€œwhere are you goingā€ or ā€œwhen are you goingā€ but understands who you are and then can match you to anything you want, especially with your travel needs.ā€™

    šŸŖ Chinese researchers have created a ā€˜robot chemistā€™ that could create breathable oxygen on Mars. The robot would extract oxygen from water on the Red Planet. But itā€™s still not clear if it would function ā€˜in the Martian environmentā€™.

    šŸ›© US startup Boom Supersonic say theyā€™re nearing the first test flight of their supersonic passenger jet. The startup said the flight could happen this year. It also announced new funding from Saudi Arabiaā€™s NEOM Investment Fund, taking its total funding to $700 million.

    āš› The US military will give Lockheed Martin $37 million to develop nuclear spacecraft technologies. The move is part of the U.S. Air Force Research Laboratoryā€™s Joint Emergent Technology Supplying On-Orbit Nuclear (JETSON) effort to create a nuclear fission reactor in space.

    šŸŒ Humans of Earth

    Key metrics to help you keep track of Project Human.

    🙋 Global population: 8,073,490,256

    🌊 Earths currently needed: 1.81542

    šŸ—“ļø 2023 progress bar: 88% complete

    📖 On this day: On 18 November 401 King Alaric I led the Visigoths across the Alps to invade northern Italy.

    Okay Computer

    Thanks for reading this week.

    The news about Altman is a shock. And most telling about it, at the moment, are the theories people are concocting to try to explain the news.

    Sam has created AGI and the board want to hide it from us! In this new world, weā€™re the same old humans with the same tendencies towards gossip and wishful thinking.

    Iā€™ll keep watching and working to make sense of it all.

    Now youā€™ve reached the end of this weekā€™s instalment, why not forward the email to someone whoā€™d also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.

    Iā€™ll be back soon. Until then, be well.

    David.

    P.S Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.newworldsamehumans.xyz
  • Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.

    If youā€™re reading this and havenā€™t yet subscribed, join 25,000+ curious souls on a journey to build a better future šŸš€šŸ”®

    To Begin

    Itā€™s a bumper instalment this week. What do we have in store?

    The Chinese government is calling on its technology industry to roll out millions of advanced humanoid robots.

    Also, NASA wants to learn how to extract breathable oxygen from Moon dust. And OpenAI says everyone can now create their own bespoke version of ChatGPT.

    Letā€™s go!

    šŸ¤– Work machines

    This week, a glimpse of the coming collision between human population dynamics and autonomous machines.

    A new study by researchers at University College London found fears of climate breakdown are changing decision-making around whether or not to have children. Published in the journal PLOS Climate, the research found that climate concern was associated with a desire for fewer children, or none at all.

    The researchers say theirs is the first systematic study of the way attitudes to climate change are affecting reproductive choices.

    Meanwhile, the Chinese Ministry of Industry and Information Technology (MIIT) issued a nine-page communique calling for domestic mass production of advanced humanoid robots by 2025. By 2027, the document says, these robot workers should be ā€˜an important new engine of economic growthā€™.

    But what is the connection between new trends in reproductive decision making and Chinaā€™s dash towards humanoid robots?

    Hereā€™s a graph of the birth rate in China from 2000 to 2022:

    ⚡ NWSH Take: The CCP knows that China is losing its battle with demographics. If the country is to become the 21st-century hegemon that President Xi dreams about, then it needs an army of workers. But instead China is watching its birth rate plummet. Meanwhile, the Global North is facing the same challenge; in North America and western Europe population growth flatlined long ago. And now it seems that fears over climate change are only set to exacerbate that trend. // This is a huge structural challenge; fewer workers tend to mean a less productive and smaller economy. So what to do? The CCP have already tried ditching the one-child policy and incentivising couples to have more children; it didn’t work. This week’s clarion call from the MIIT offers us a glimpse of an alternative answer: robots. If China won’t have enough human workers to sustain economic growth, then the CCP hopes humanoid robot workers can do the job(s) for them. // Innovators in the Global North are heading in the same direction. This week, Tesla posted over 50 job ads for its Optimus robot team. Elon Musk — who has long bemoaned population decline and its coming impacts — has said he believes Optimus will end up being a bigger part of Tesla’s business than EVs. And two weeks ago I wrote on how Amazon are trialling the Digit humanoid robot in some US fulfilment centres. // My co-founder at The Exponentialist, Raoul Pal, says that in the new world we’re building, robots are demographics. In other words, the rise of autonomous machines is set to decouple economic growth from population growth. The CCP, Musk, and many others besides are making the same bet. And my guess? They’re going to be proven right.

    šŸŒŒ Space out

    NASA continues to prepare for its mission to the Moon. This week, further news.

    The Agency wants to explore methods to extract breathable oxygen from Moon dust. Its Space Technology Mission Directorate is seeking input from industry partners and external researchers, and hopes to create a demonstration technology soon.

    NASA hopes to put humans back on the Moon for the first time since 1972 with its Artemis 3 mission, currently planned for 2025.

    Meanwhile, stunning pictures came back this week from the European Space Agencyā€™s Euclid telescope. Launched in July, Euclid is now around 1.5 million kilometres from Earth; thatā€™s about four times as far away as the Moon. And itā€™s capturing images of incredible clarity.

    This is the Perseus cluster, a group of over 1,000 galaxies located 240 million light years from Earth. Each galaxy pictured ā€” and there are a further 100,000 galaxies in the background of the shot ā€” contains hundreds of billions of stars:

    Hereā€™s the Horsehead Nebula, a cloud of dust and gas in the Orion constellation:

    ⚡ NWSH Take: Okay, this entire segment was mainly an excuse to show you the breathtaking images coming back from Euclid. But there is an underlying truth here. We’re amid a new space age, due mainly to the insane drop in the cost of access to space. Back in 2010 launch costs hovered at around $20,000/kg; today they’re around $1,000/kg. That’s thanks mainly to the reusable rocket technology developed by SpaceX. We’re heading back into space via multiple partnerships between the international space agencies and private companies. And this time the plan is to stay there. // One signal of the emerging public-private space ecosystem? This week, SpaceX agreed to deliver the US military’s new space plane, the X-37B, into orbit on its Falcon Heavy rocket in December. And private space companies, including SpaceX, will play a huge role in the upcoming Artemis crewed mission to the Moon. Most analysts reckon that mission will end up being delayed until 2026/7. Even so, the next few years are set to be a thrilling road towards the lunar surface. Expect Moon hype to reach fever pitch. And from there, of course, all roads will lead to Mars.

    šŸ§  Your intelligence

    Thereā€™s little doubt about the biggest story in the mainstream tech press this week.

    OpenAI made headlines all over again with the launch of custom GPTs: bespoke versions of ChatGPT that any user can create using simple natural language instructions and their own training content or data.

    The feature was announced at OpenAI Dev Day, which saw CEO Sam Altman create a custom Startup Mentor GPT live on stage in about five minutes.

    X (formerly Twitter) went wild. And yes, a million and one GPTs are assuredly coming.

    How is this going to play out?

    ⚡ NWSH Take: Remember back in 2012, when every third friend of yours was making an app? OpenAI are hoping to recreate that magic all over again. They want to be the platform that profits from a huge wave of AI innovation. ChatGPT Plus users will be able to create custom GPTs and charge others for use, and Altman says they’ll be rewarded via revenue share. // Remember, any ChatGPT Plus user can now create a bespoke GPT in a few minutes. There will be a vast long tail of these things. The winners, though, will be those with (i) deep reserves of proprietary content or data that they can use to enhance the outputs of their bot, and (ii) audiences who are receptive to their creations. // But creating a bespoke GPT is now so easy that we’ll also see something we didn’t with apps. That is, individuals creating bespoke bots just for their own use — to help them manage their accounts, or choose birthday presents for family and friends, and much else besides. Yes, this is an App Store moment for AI. But it also marks another beginning: of personalised machine intelligence on tap.

    šŸ—“ļø Also this week

    šŸ’„ The Exponentialist, my new premium and enterprise-level research service, launched to the world! Itā€™s a partnership between me and the macro economist and Real Vision CEO Raoul Pal. To mark launch day, weā€™ve made an excerpt of the first essay free for all to read ā€” watch out for it in your inbox on Sunday.

    šŸ“Œ New tech company Humane launched the AI Pin. This long-awaited first product from Humane is a voice and gesture-controlled device that clips to your shirt and integrates with ChatGPT and other services. Humane hope their ā€˜disappearing computerā€™ will be the next iPhone. It remains to be seen whether people really want to talk to a badge on their lapel. One fascinating signal, though? See how OpenAI ā€” and their partner, Microsoft ā€” are set to become the underlying infrastructure that fuels a whole raft of AI innovations. Where are Alphabet? And when will Apple launch their own generative AI play? Itā€™s going to be fascinating watching this battle unfold.

    🇨🇳 Nvidia has developed special new AI chips for China, according to Chinese media. Recent US regulations prevented Nvidia from selling its powerful A100 AI chip to Chinese companies. The new chips — which include the H20, reportedly only half as powerful as the A100 — would not fall under the restrictions. Nvidia has so far refused to comment.

    🧬 Scientists have created a new strain of yeast with a genome that is over 50% synthetic DNA. A group of labs called the Sc2.0 consortium has been attempting to create a strain of yeast with a fully synthetic genome for 16 years now; this latest advance marks a major step forward. Until now, scientists had only managed to synthesise the much simpler genomes of some viruses and bacteria.

    šŸ‘Øā€āš•ļø Neuralink is seeking a volunteer for its first brain implant surgery. The company wants to find a quadriplegic adult under the age of 40, who will allow a surgeon to implant electrodes and small wires into the part of the brain that controls the forearms and hands.

    šŸ™ˆ A new UN survey says 85% of citizens across 16 countries are worried about online disinformation. The 16 countries surveyed will each host elections in 2024. The survey found that 87% of respondents fear disinformation will influence the outcome of those elections. Back in New Week #122 I wrote on new research showing far fewer US adults are following mainstream sources of news.

    šŸ A team of Chinese researchers created a swarm of drones able to ā€˜talk to one anotherā€™ and assign tasks to achieve a shared goal. The drone swarm is fuelled by a large language model, which enables the drones to act as AI agents that can reason in language, share that reasoning with other drones, and determine courses of action.

    📱 Samsung unveiled its new generative AI model, Gauss, and says it will soon arrive on its devices. The model can generate text, code, and images, and the company says it will be available on its Galaxy S24 phone, due to be released in 2024. For the second time in this week’s instalment I ask: how long until Apple deploys its own on-device LLM? Rumour has it that the company is planning a radical LLM-based overhaul of its AI assistant, Siri.

    šŸŒ Humans of Earth

    Key metrics to help you keep track of Project Human.

    🙋 Global population: 8,072,064,026

    🌊 Earths currently needed: 1.81498

    šŸ—“ļø 2023 progress bar: 86% complete

    šŸ“– On this day: On 11 November 1675 German mathematician Gottfried Leibniz demonstrates integral calculus for the first time.

    Robot Army

    Thanks for reading this week.

    The enmeshment of labour force dynamics and robots will be one of the most consequential shifts of the coming decades.

    This newsletter will keep watching, and working to make sense of it all. And you can help!

    Now youā€™ve reached the end of this weekā€™s instalment, why not forward the email to someone whoā€™d also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.

    Iā€™ll be back on Sunday. Until then, be well.

    David.

    P.S Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.newworldsamehumans.xyz
  • Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.

    If youā€™re reading this and havenā€™t yet subscribed, join 25,000+ curious souls on a journey to build a better future šŸš€šŸ”®

    To Begin

    This week, two intriguing stories and a big announcement.

    Global leaders and senior tech executives gathered at the UKā€™s AI Safety Summit. But beyond the walls of Bletchley Park, the debate on AI is raging hotter than ever.

    Meanwhile, tech billionaires in Silicon Valley are running into trouble over their plans to build a new city-state utopia called California Forever.

    As for the announcement? Just keep scrolling.

    Letā€™s do this.

    šŸ§  Dream machines

    The UK government this week trumpeted the success of its international AI gathering; it took place at the historic fountainhead of the computer revolution, Bletchley Park.

    An impressive guest list, including US vice-president Kamala Harris and the European Commission president Ursula von der Leyen, gathered at the Summit. And their meeting resulted in the Bletchley Declaration, which the UK government has hailed as a world-first international statement on AI safety.

    Hereā€™s a taste for those who speak technocrat:

    ā€˜We affirm that, whilst safety must be considered across the AI lifecycle, actors developing frontier AI capabilities, in particular those AI systems which are unusually powerful and potentially harmful, have a particularly strong responsibility for ensuring the safety of these AI systemsā€¦We encourage all relevant actors to provide context-appropriate transparency and accountability on their plans to measure, monitor and mitigate potentially harmful capabilities and the associated effects that may emergeā€¦ā€™

    But beyond the Declaration, this week made it clear that weā€™re further than ever from a consensus on the deep implications of machine intelligence. In fact, this was the week that a maximum volume war of words broke out between leading AI builders.

    Google Brain co-founder and Stanford professor Andrew Ng said key AI players, including Sam Altman, are wildly playing up fears of AI doom in order to spark regulation that will suppress competition from insurgents. He called the proposal that the training of powerful AI models should require a license ā€˜colossally dumbā€™.

    That message was echoed by Meta’s chief AI scientist Yann LeCun, who favours open source AI models — that is, models that anyone can use.

    But Google DeepMind CEO Demis Hassabis hit back at LeCun, saying that failure to regulate AI could result in ‘grim’ consequences for humanity.

    This account barely scratches the surface of the arguments that raged this week.

    As for OpenAI, they launched a new team intended to study and prepare for ā€˜catastrophic risksā€™ including an AI-instigated nuclear war.

    āš” NWSH Take: Who would ever have thought that a bunch of super-smart, tech-obsessed social media addicts would end up arguing like this? While Bletchley saw a rare moment of diplomatic unity, inside the AI industry the full spectrum of opinion is manifest, from AI doom is all a load of rubbish to act now or the end of humanity is probable. // It pays, here, to remember that two things can be true at once. Yes, Altmanā€™s global tour to warn of ā€˜catastrophic risksā€™ is a carefully orchestrated marketing campaign. But itā€™s also the case that no one, yet, has a definitive picture of the risks in play. // What is increasingly clear, though, is that the rise of machine intelligence is the primary fact of our shared lives now. It will do more than any other force to reshape our collective future. // But the Bletchley Declaration consists of bromides that will change nothing. And the sight at Bletchley this week of UK prime minister Rishi Sunak interviewing Elon Musk ā€” positioning Musk as the star and Sunak as a fan ā€” spoke volumes about the power imbalances weā€™ve allowed to evolve when it comes to government (i.e. the people) and unaccountable tech overlords. // We must recover our collective agency; our ability to assert human modes of living and being in the face of an ongoing technology revolution. That means doing politics. Bletchley was a start. But whatā€™s needed next are citizen assemblies, and an authentic movement around AI for the people.

    šŸ’„ The Exponentialist

    As some of you will have seen on social media, I made a big announcement this week.

    Iā€™ve partnered with Raoul Pal, renowned macro-economic thinker and CEO of Real Vision, on a new premium research service called The Exponentialist.

    This is a professional and enterprise-level service for those who want to go deep on emerging technologies, the futures theyā€™ll create, and the challenges and opportunities latent in all that.

    This wonā€™t be for everyone in the NWSH community. But if youā€™re a foresight professional, strategist, founder, marketing leader, product manager, designer or much else besides, The Exponentialist will fuel you and your team. And it will take up only a fraction of your research budget.

    It will also be deeply valuable for anyone seeking to position an investment portfolio around tech and crypto.

    This launch changes nothing about New World Same Humans and the community weā€™re building here. Our mission continues unchanged!

    If The Exponentialist sounds useful, go here to learn more. And if youā€™ve subscribed or youā€™re considering it, hit reply to this email so I can say thanks.

    šŸ™ Now and Forever

    While the newsletter was on pause, we learned that a group of Silicon Valley billionaires are planning a new city-state utopia in California. This week, it seems their project has run into trouble.

    California Forever is a new city planned for construction in Solano County in the north of the state. Itā€™s backed by some of techā€™s most notable power players, including ultra-rich VC Marc Andreessen, Stripe founders Patrick and John Collison, and LinkedIn founder Reid Hoffman.

    The groupā€™s vision for the city has strong solar punk, hi-tech sustainable utopia vibes:

    But this week it was reported that the mysterious company behind the plans, Flannery Associates, is accused of using ā€˜strong-arm tacticsā€™ including lease terminations to buy up the Bay Area farmland it needs. Local farmers arenā€™t happy, and now some of them are taking the matter to court.

    Trouble in (planned) paradise, then.

    ⚡ NWSH Take: This project reminds me of the various other pseudo-independent city-states discussed in this newsletter over the years. There’s Walmart billionaire Marc Lore’s Telosa City, for example, a sustainable paradise planned for the Nevada desert. And Praxis, a startup on a mission to build a new Great City somewhere in the Mediterranean, funded by NFTs of the monuments they’ll build in the city once it exists. // Few details have emerged of the way California Forever will be governed. But for a glimpse, we might turn to billionaire backer Marc Andreessen’s recent Techno-Optimist Manifesto, which proclaims: ‘we believe in ambition, aggression, persistence, relentlessness — strength.’ I’m thinking libertarian, with a strong emphasis on innovation and startup culture. // Of course, innovation and startups can be great. But they only function in the context of the broader socio-political frameworks that libertarians such as Andreessen repudiate. As with the other charter city projects covered in this newsletter, I can’t help feeling that at the heart of California Forever is a fantasy of permanent escape from politics. Escape, that is, from the messy, awkward business of managing conflict among different interest groups, and enacting trade-offs between different but equally legitimate value systems. This argument with the farmers might be the first public conflict that California Forever has run into, but it won’t be the last.

    šŸ—“ļø Also this week

    šŸŽ¬ Hollywood actress Scarlett Johansson is suing an AI app for cloning her voice and using it in an advert. Johansson says Lisa AI: 90s Yearbook and Avatar used an AI version of her voice without permission. Last week I wrote on the coming wave of legal disputes over AI outputs founded in copyrighted intellectual property, including Universal Music Groupā€™s lawsuit against Anthropic. UMG say Anthropic used their lyrics to help train its AI chatbot Claude.

    🌨 Tesla drivers say their Full Self Drive software is failing because the car’s cameras are fogging up in cold weather. Back in 2021 Tesla ditched the radar sensors that usually form part of self-driving systems, leaving their self-drive reliant on cameras.

    šŸ‘¾ The Pentagon launched a new UFO reporting tool. The secure online form is open only to current or former federal employees, or those with ā€˜direct knowledge of US government programs or activities related to UAP dating back to 1945ā€™.

    šŸ‡ØšŸ‡³ Researchers from the Chinese microchip company MakeSens say theyā€™ve created a chip that can perform certain AI tasks 3,000 times faster than the Nvidia A100. Writing in the journal Nature, the researchers say the All-Analogue Chip Combining Electronics and Light could soon be used in wearable devices, electric cars or smart factories. The US have restricted sales to China of Nvidiaā€™s leading A100 AI chip, leaving the country scrabbling to bolster domestic production capabilities.

    šŸŖ NASA is locating buried ice on Mars by using a sophisticated new map. The Subsurface Water Ice Mapping project uses images of the planet from several NASA missions, including the 2001 Mars Odyssey satellite. The Agency says subsurface ice can serve as drinking water for the first humans to set foot on the Red Planet.

    🌅 A new study says that the Earth’s climate is more sensitive to carbon emissions than most scientists believe. Published in the journal Oxford Open Climate Change, the study says a doubling of atmospheric CO2 will cause a 4.8C rise in average global temperatures, and not the 3C rise that current mainstream thinking forecasts.

    šŸ¤– Boston Dynamics turned its robot dog, Spot, into a tour guide by integrating it with ChatGPT. Iā€™ve covered the evolution of Spot since the earliest days of this newsletter, and it would seem rude to stop now.

    🕸 Scientists say they added spider DNA to silkworms, resulting in silk that is stronger than Kevlar. The gene-edited silkworms spin a silk six times stronger than Kevlar, which could one day be used in surgical sutures and armoured vests.

    šŸŒ Humans of Earth

    Key metrics to help you keep track of Project Human.

    🙋 Global population: 8,070,681,872

    🌊 Earths currently needed: 1.8455

    šŸ—“ļø 2023 progress bar: 84% complete

    šŸ“– On this day: On 4 November 1847 a Scottish physician, James Young Simpson, discovers the anaesthetic properties of chloroform.

    City on the Hill

    Thanks for reading this week.

    The dream that is a shining City on the Hill — an example to all the world — is ancient. And our quest to build such cities in the 21st century is a classic case of new world, same humans.

    This newsletter will keep watching, and working to make sense of it all.

    Now youā€™ve reached the end of this weekā€™s instalment, why not forward the email to someone whoā€™d also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.

    Iā€™ll be back next week with another postcard from the new world. Until then, be well,

    David.

    P.S Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.newworldsamehumans.xyz
  • Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.

    If youā€™re reading this and havenā€™t yet subscribed, join 25,000+ curious souls on a journey to build a better future šŸš€šŸ”®

    To Begin

    Itā€™s great to be back! Here, as promised, is the first post-break instalment of New Week. What do we have in store?

    Nvidia are using an AI agent called Eureka to autonomously train simulated robots in virtual environments.

    Meanwhile, research from Pew shows far fewer US adults are following the news; couple that with emerging deepfake technology, and 2024 should make for an interesting Presidential election year.

    And a new AI model, 3D-GPT, can turn text prompts into amazing 3D worlds.

    Letā€™s get into it.

    šŸ¦¾ Robot education

    Iā€™ve written often about Nvidiaā€™s Omniverse platform: an AI-fuelled industrial metaverse thatā€™s being used by BMW, for example, to simulate entire factories.

    This week, Nvidia showcased Eureka, an autonomous AI agent that can be set loose on simulated robots and train them to perform complex tasks.

    Eureka uses GPT-4 to write code that sets the simulated robots specific goals, and starts them on loops of trial-and-error learning. As the robot sets about its task, Eureka gathers feedback and iterates its code, leading to a virtuous circle of better code and faster learning.
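
    Conceptually, that outer loop is simple. Here is a heavily simplified, hypothetical sketch of an Eureka-style search; llm_write_reward_code and run_simulation are placeholders standing in for the GPT-4 calls and the physics simulator the real system uses, so read this as an illustration of the pattern rather than Nvidia's implementation.

        # Hypothetical sketch of an Eureka-style loop: an LLM proposes reward code,
        # the simulator evaluates it, and the results feed the next proposal.
        # llm_write_reward_code() and run_simulation() are placeholders, not real APIs.

        def llm_write_reward_code(task: str, feedback: str) -> str:
            """Ask the LLM for Python source defining reward(state, action)."""
            raise NotImplementedError("placeholder for a GPT-4 call")

        def run_simulation(reward_source: str, episodes: int = 100) -> dict:
            """Train the simulated robot with this reward and return statistics."""
            raise NotImplementedError("placeholder for the robot simulator")

        def eureka_style_search(task: str, iterations: int = 5) -> str:
            best_code, best_score = "", float("-inf")
            feedback = "no feedback yet"
            for _ in range(iterations):
                code = llm_write_reward_code(task, feedback)   # LLM proposes reward code
                stats = run_simulation(code)                   # trial-and-error training
                if stats["success_rate"] > best_score:
                    best_code, best_score = code, stats["success_rate"]
                # Summarise the run so the LLM can improve its next proposal
                feedback = f"success_rate={stats['success_rate']:.2f}"
            return best_code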

    Via the agent, simulated robots inside Omniverse have learned to perform over 30 complex physical tasks, including highly dextrous pen manipulation, the handling of cubes, and opening doors:

    Nvidia says that trial-and-error learning code generated by Eureka outperforms that created by human experts for over 80% of the tasks studied so far.

    Meanwhile, Amazon this week trialled a humanoid robot called Digit in some of its US warehouses. The company says Digit could ā€˜free upā€™ warehouse staff to perform other tasks.

    āš” NWSH Take: Thereā€™s no doubt: the robots are coming. I laughed when Elon Musk announced the Tesla humanoid robot, Optimus, alongside a man dancing in a white spandex suit. Two years on, Optimus is autonomously sorting objects by hand. The pace of development has been insane. // Eureka and AI agents like it, though, have the potential to spark an explosion in robot competence. Teaching robots to navigate physical environments is hard. Now, weā€™ll be able to establish recursive loops of trial, error, and improvement in virtual space ā€” no human input needed. // What could this competence explosion mean? When it comes to work, look to this weekā€™s Amazon trial. Amazon employs 1.6 million people in its fulfilment centres worldwide, and currently itā€™s deploying the usual line: ā€˜these robots will free up staff, not replace themā€™. Thatā€™s hard to believe longterm; a phase of job displacement is coming, and itā€™s going to be painful for many. // Meanwhile, robots will make their way through workplaces and into our homes. Recently I spoke to legendary tech analyst Robert Scoble; he sees a future in which humanoid robots are delivered to homes on-demand by autonomous vehicles to vacuum, empty the dishwasher, and make the coffee. For further thoughts on that future, read Our Coming Robot Utopia.

    šŸ“° What news

    This week, Pew Research gave a fascinating insight into our changing information environment.

    A new survey shows that the proportion of US adults who closely follow the news has dropped precipitously across the last few years. Back in 2016, 51% of US adults said they followed the news all or most of the time. By 2022, that number had fallen to 38%.

    Remarkably, the decline has taken place across all demographic lines, including age, gender, ethnicity, and voting preference.

    ⚡ NWSH Take: This feels like a big deal. We’re heading into a US presidential election year. And in 2024 a new set of circumstances is going to pertain. First, deepfakes are set to cause chaos as never before; just see this week’s convincing fake of Greta Thunberg in which she appears to call for ‘sustainable weaponry’. And now, via this research, we know that far fewer US voters are paying close attention to conventional sources of news. What happens to presidential campaigns in this kind of media environment? We’re going to find out. // Meanwhile, the longterm structural challenges are clear. Decades ago, the pioneers of Web 2.0 — I’m looking at you, Zuck — sold us on the idea that a connected world would mean a world informed and enlightened as never before. It hasn’t turned out that way. In fact, social media has turned many away from news as traditionally defined, and towards unverified gossip and conspiracy theory. The institutions and processes of our democracies evolved to function in symbiosis with an established media that operates under certain standards, and that is the primary source of information for voters. All that is now falling apart. Our democracies — what they are, how they work — are going to change. The 2024 presidential elections will be a window on to what is coming.

    šŸ—ŗ Hello world

    This newsletter has watched the unfolding generative AI text-to-image revolution closely. But itā€™s always had one eye on another, even more compelling destination: text-to-worlds.

    Now, that dream is being realised.

    Researchers from the Australian National University, the University of Oxford, and the Beijing Academy of Artificial Intelligence this week showcased a new AI model called 3D-GPT. It generates 3D worlds based on natural language text prompts provided by the user.

    According to the research paper, the model deploys multiple AI agents to understand the text prompt and execute 3D modelling inside the open source modelling platform Blender.

    See that paper for more on some of the worlds generated, including ā€˜A serene winter landscape, with snow covered evergreen trees and a frozen lake reflecting the pale sunlight.ā€™
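
    For a sense of what ‘executing 3D modelling inside Blender’ means in practice, here is a minimal, hand-written sketch of the kind of script an agent might emit for that winter-landscape prompt. It uses Blender's bpy Python API, so it only runs inside Blender; the specific primitives and values are illustrative guesses, not output from 3D-GPT.

        # Illustrative sketch of an agent-emitted Blender script for a snowy scene.
        # Requires Blender's bundled Python (the bpy module); values are invented.
        import bpy

        # A large plane standing in for the snow field / frozen lake
        bpy.ops.mesh.primitive_plane_add(size=40, location=(0, 0, 0))

        # A few cones as stand-in evergreen trees
        for x, y in [(-6, 4), (0, 8), (5, -3)]:
            bpy.ops.mesh.primitive_cone_add(radius1=1.2, depth=4, location=(x, y, 2))

        # A sun lamp for pale winter light, plus a camera aimed at the scene
        bpy.ops.object.light_add(type='SUN', location=(0, -10, 12))
        bpy.ops.object.camera_add(location=(14, -14, 8), rotation=(1.1, 0, 0.78))
        bpy.context.scene.camera = bpy.context.object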

    āš” NWSH Take: 3D-GPT takes its place alongside this prototype text-to-world tool created by Blockade Labs, which I wrote about back in April. Where is all this heading? Weā€™re still pretty deep in a metaverse winter right now, though there are signs of a thaw; the most obvious being the imminent arrival of the Apple Vision Pro mixed reality headset, which could act for millions as a gateway into more sophisticated virtual environments. While the word metaverse is probably damaged beyond repair, I still believe that immersive virtual worlds will play a role in our shared future. // What weā€™re talking about with text-to-world models, though, is even more head-spinning. 3D-GPT builds worlds that we look at via a screen. But eventually, weā€™ll be able to create entire, immersive, highly realistic VR worlds simply by describing them. In this way weā€™ll become something akin to sorcerers, able to confect new realities on command. That will transform video gaming and film. It will fuel new art forms and modes of collective expression. And, ultimately, it will change our relationship with reality ā€” that is, with this reality ā€” itself.

    šŸ—“ļø Also this week

    🎨 A new anti-AI tool allows artists to fight back against AI models such as DALL-E that use their work as training data. Nightshade, dubbed a data poisoning tool, can be applied to creative work; if that work is scraped to train an AI model, Nightshade poisons the training data and degrades the resulting model’s outputs. We’re going to see a rising number of disputes between owners of creative IP and the owners of AI models who used that work as training material. See also, this week, Universal Music Group’s lawsuit against Anthropic; UMG say Anthropic unlawfully used its song lyrics to help train the Claude AI chatbot. And now major newspapers, including the New York Times, are seeking payment from OpenAI for use of their content to help train GPT-4.

    ā˜€ļø The International Energy Agency says the global shift towards renewable energy is now ā€˜unstoppableā€™. The Agencyā€™s latest World Energy Outlook report says renewables ā€” mainly solar and wind ā€” will provide half the worldā€™s electricity by 2030.

    šŸ›° NASAā€™s interstellar Voyager probes had a software update beamed to them across a distance of 12 billion miles. The probes launched 46 years ago, on a mission to explore deep space. These updates are bug fixes, intended to stop Voyager 1 sending corrupted data back to mission control, and to stop gunk building up in the thrusters on both probes.

    šŸ™ˆ Elon Musk says he may remove X (formerly Twitter) from the EU in response to new rules that ban the spread of harmful content. The new Digital Services Act is intended to hold social media platforms accountable for fake news, false advertising, and on-platform criminal activity.

    šŸ­ Nvidia and Foxconn say they are partnering to build a number of ā€˜AI factoriesā€™. They will be next-generation data centres that use Nvidiaā€™s AI chips to train the AI models that fuel robots, autonomous vehicles, and generative AI apps.

    🤖 The CEO of DeepMind, Demis Hassabis, says the risks posed by AI should be taken as seriously as those posed by climate change. Hassabis called for international regulatory oversight of AI, and said technologists should take inspiration from the Intergovernmental Panel on Climate Change (IPCC).

    šŸ‘¶ A Dutch startup, Spaceborn United, wants to see if itā€™s possible to create human babies in space. The company says that in 2024 it will send a satellite-lab into low Earth orbit and there attempt to conduct in-vitro fertilisation (IVF). CEO Egbert Edelbroek hopes the technology can pave the way for humans to be born in future space colonies.

    😳 A British journalist went undercover at Amazon and did not like what he saw. Oobah Butler found that it was possible to list bottles of Amazon delivery driver urine(!) for sale on the platform. He also claims that Amazon is using devious tactics to avoid worker unionisation.

    šŸŒ Humans of Earth

    Key metrics to help you keep track of Project Human.

    🙋 Global population: 8,069,001,802

    🌊 Earths currently needed: 1.8454

    šŸ—“ļø 2023 progress bar: 82% complete

    šŸ“– On this day: On 26 October 1977 the last human case of smallpox was diagnosed in Ali Maow Maalin, a hospital cook from Somalia. The WHO and CDC consider this date to mark the eradication of the disease via the smallpox vaccine.

    Back Again

    Thanks for reading this week.

    The emergence of text-to-world AI models ā€” and the future they promise of new realities on demand ā€” is dizzying.

    This newsletter will keep watching, and working to make sense of it all.

    Now youā€™ve reached the end of this weekā€™s instalment, why not forward the email to someone whoā€™d also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.

    Itā€™s great to be back in your inbox. Thanks for having me. Iā€™ll return, of course, next week.

    Until then, be well,

    David.

    P.S Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.newworldsamehumans.xyz
  • Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.

    If youā€™re reading this and havenā€™t yet subscribed, join 24,000+ curious souls on a journey to build a better future šŸš€šŸ”®

    To Begin

    Itā€™s a bumper instalment this week. Whatā€™s coming up?

    Google showcase their AI arsenal at the annual I/O developers conference.

    Meanwhile, new research reveals that the ocean currents may be about to take a weird turn, with disruptive results for the global climate. And a Snapchat influencer launches an AI version of herself to her 1.9 million followers.

    Letā€™s go!

    šŸ’„ The search empire strikes back

    This week, Alphabet leaned hard into an AI everywhere, for everyone strategy at its I/O developers conference.

    CEO Sundar Pichai announced PaLM 2, an update on the companyā€™s primary large language model. Googleā€™s Bard chatbot is now fuelled by the new model, and has been made available globally with no waitlist.

    There was much more. Google execs announced new AI features in Maps, and a powerful new magic editor for photos that brings Photoshop-like capabilities into the phone. Pichai said AI around one zillion times, and Google later published a handy summary of all the announcements.

    The centrepiece, though, was a demonstration of Googleā€™s plans to weave generative AI through search. In this new search experience AI-generated results take up most of the first screen; users in the US can now access this experimental version of search via Google Labs.

    The I/O conference wasnā€™t the only source of intriguing announcements from Google, though. The company also launched Geospatial Creator, an impressive tool that allows creators to build and publish geolocated AR installations. Essentially, to build a digital object and drop it anywhere on the surface of the Earth.

    The tool is powered by the Google Maps platform, and integrated into Adobe Aero and Unity.

    ⚡ NWSH Take: Google researchers invented the transformer models that underpin this generative AI revolution. But across the last two years the tech giant has watched OpenAI steal its thunder. This week’s I/O conference was a statement of intent: we’re taking back control. Competition can only be good for users, many of whom will have gone straight to the new PaLM 2-powered Bard to compare it to ChatGPT. My anecdotal experience is that Bard is faster — ChatGPT with GPT-4 is a little slow — but the consensus at the moment is that it’s currently less factually reliable. Meanwhile Google is working on Gemini, a multimodal LLM clearly intended as a GPT-4 killer. The war for supremacy between Alphabet and OpenAI-Microsoft is just getting started. // Geospatial Creator was overshadowed by I/O, which feels fitting for a year in which the metaverse has been comprehensively out-hyped by AI. But the tool is an intriguing glimpse of the emerging unified digital-physical field. Build a digital sculpture from your desk in London, and drop it into a park in São Paulo for your subscribers to view. And pretty soon, via text-to-everything models, you’ll be able simply to describe that sculpture and watch an AI model build it for you. A couple of years ago I wrote about the ways in which AR will change our relationship with a shared physical reality. I stand by those ideas, but in the age of generative AI that essay needs an update; one will be coming soon.

    šŸŒŠ Climate weirding

    Also this week, new research says changes in the ocean currents may soon enweirden the climate of northern Europe.

    The Beaufort Gyre moves in a clockwise direction around the western Arctic Ocean, and helps regulate sea ice formation in that region. Scientists have long suspected that climate change is causing changes to the Gyreā€™s movement.

    This new paper, Recent State Transition of the Arctic Oceanā€™s Beaufort Gyre, was published in Nature, and makes use of satellite data collected between 2011 and 2019. It provides the first observational confirmation that the Gyre is slowing and has entered a new ā€˜quasi-stable stateā€™.

    This means, say the scientists, that the Gyre may soon expel a massive amount of icy fresh water into the North Atlantic.

    And that could spark further ocean current changes that cause the climate in western Europe to become significantly cooler.

    ⚡ NWSH Take: Yes, cooler. I’m no ocean currents expert, and I found this quick explainer on the Beaufort Gyre helpful. Essentially, the Gyre periodically sucks in a ton of icy fresh water and then exhales it, and it’s now long overdue an exhale; when that massive exhale comes it could send other ocean currents askew in ways that dramatically cool western Europe. Remember, the Gulf Stream — a major ocean current responsible for several global weather patterns — has slowed by around 16% already; scientists are scrambling to understand how a huge Beaufort Gyre exhale will impact this. // The upshot? One way or another, we’re probably about to undergo a climate weirding on a scale that few of us are ready for. While drought and fires rage in some places, a new freeze will break out in others. At the outer edges of this is the risk the Gulf Stream shuts down entirely, triggering rapid and chaotic climate disruption fuelled by a set of feedback loops. These processes are hugely complex; we’ll see much more work such as this attempt to build machine learning-fuelled simulations that give us advance warning of ocean current shift. Perhaps Nvidia’s massive Earth-2 simulation, now in development, can help.

    ā¤ļø Hey girlfriend

    Regular readers know that virtual companions are a longstanding NWSH obsession. This week, another glimpse of what is coming.

    Snapchat influencer Caryn Marjorie, who has 1.9 million followers on the platform, released an AI girlfriend version of herself. Users pay $1 per minute to chat to CarynAI, which the creator says is built on top of GPT-4 and trained on over 2,000 hours of her video and voice content.

    Marjorie says the bot made $72,000 in the first week of release. She says that it could make around $5 million per month if 20,000 people ā€” or just 1% of her Snapchat followers ā€” subscribe.

    So far, things seem to be going well:

    ⚡ NWSH Take: Back in 2013 I started telling leaders in big corporations that a new age of AI-fuelled conversational agents was coming. That people would even have 'relationships' with these new virtual entities; that it would be something way beyond Siri — their best reference point at the time. Some leaned forward; some raised a sceptical eyebrow. My constant refrain back then? I know it sounds like science-fiction, but it’s coming. Well, it’s here. Virtual Companions are set to unlock new manifestations of some of the deepest and most powerful human impulses: social connection, friendship, intimacy. // Observing this truth is not the same as celebrating it. What happens to authentic human connection in a world in which we simulate it — and commodify those simulations — in this way? What harms are we doing to vulnerable people who become attached to, even dependent on, these creations? // The central message still pertains: it’s weird but it’s happening. In the end I can’t help feeling that so much about contemporary living on the internet — the way it atomises our attention, the simulation of human relationships — must push us to finally realise that authentic human being together is the only sphere of activity invulnerable to technological advance. No machine can be a human, truly seeing you as another human. In the age of the machine, that truth becomes sacred.

    šŸ—“ļø Also this week

    āš› Microsoft announced a partnership with fusion power provider Helion Energy. The deal will see Microsoft buy electricity created by a Helion fusion plant, which is expected to be operational by 2028; Helion says it marks the worldā€™s first fusion power purchase agreement between two companies. Microsoftā€™s Azure Cloud platform will need vast amounts of compute power ā€” using stupendous amounts of energy ā€” given its commitment to support OpenAI and its commercialisation of ChatGPT. Iā€™ll be writing more soon about the emerging symbiotic relationship between energy and AI.

    🛰 NASA launched two storm-observing satellites, called CubeSats, intended to study tropical cyclones. The pair will form part of a constellation of four identical satellites that will stay in low Earth orbit over the planet’s tropics, allowing them to pass over any given tropical storm around once per hour.

    šŸ‘Øā€āš•ļø Pharma company BioNTech is developing an mRNA vaccine against pancreatic cancer. In encouraging early trial results, the vaccine prevented tumour recurrence after surgery in eight of 16 patients.

    āš–ļø Startup Anthropic revealed its approach creating an AI with values. Anthropicā€™s Constitutional AI approach see it train its AI assistant, Claude, on a set of initial principles drawn from various sources including the United Nations Declaration of Human Rights. The AI then applies these principles itself to help it choose the most ethical response. This is in contrast to the approach used by OpenAI and Google, which sees human moderators train the AI to avoid toxic outputs.

    šŸŒ¬ Wind is now the single largest source of electricity in the UK. In the first quarter of this year wind turbines accounted for one third of all electricity used in the country. It marks the first time wind has generated more of the countryā€™s power than gas. The UK wants its entire electricity use to be emissions free by 2035.

    šŸŒŒ California-based startup Vast Space say they will launch the first commercial space station. The startup says it will launch the first part of the station, an outpost called Haven-1, on a SpaceX rocket in 2025. Vast Space want eventually to grow the station into a 100-metre long multi-module station that spins to create onboard artificial gravity.
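
    As flagged in the Anthropic item above, here is a minimal, hypothetical sketch of the critique-and-revise idea behind a constitution-style approach. The call_llm function is a placeholder for a real model API, and the single principle shown is illustrative rather than a quotation from Anthropic's actual constitution.

        # Hypothetical sketch of a constitution-style critique-and-revise loop.
        # call_llm is a placeholder; the principle below is illustrative only.

        PRINCIPLES = [
            "Choose the response that is most supportive of life, liberty and personal security.",
        ]

        def call_llm(prompt: str) -> str:
            raise NotImplementedError("placeholder for an LLM API call")

        def constitutional_respond(user_message: str) -> str:
            draft = call_llm(user_message)
            for principle in PRINCIPLES:
                critique = call_llm(
                    f"Principle: {principle}\nResponse: {draft}\n"
                    "Point out any way the response conflicts with the principle."
                )
                draft = call_llm(
                    f"Rewrite the response so it follows the principle.\n"
                    f"Critique: {critique}\nOriginal response: {draft}"
                )
            return draft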

    šŸŒ Humans of Earth

    Key metrics to help you keep track of Project Human.

    🙋 Global population: 8,032,723,422

    🌊 Earths currently needed: 1.8036642811

    šŸ’‰ Global population vaccinated: 64.4%

    šŸ—“ļø 2023 progress bar: 36% complete

    šŸ“– On this day: On 13 May 1950 the inaugural Formula One World Championship race takes place at the Silverstone Circuit in England.

    My Generation

    Thanks for reading this week.

    Online search revolutionised our relationship with knowledge. Now, generative AI is set to enact yet more change. Itā€™s another case of new world, same humans.

    This newsletter will keep watching, and working to make sense of it all. And thereā€™s one thing you can do to help: share!

    Now youā€™ve reached the end of this weekā€™s instalment, why not forward the email to someone whoā€™d also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.

    Iā€™ll be back next week. Until then, be well,

    David.

    P.S Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.newworldsamehumans.xyz
  • Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.

    If youā€™re reading this and havenā€™t yet subscribed, join 24,000+ curious souls on a journey to build a better future šŸš€šŸ”®

    To Begin

    The generative AI rollercoaster is thundering forward at increasing speed.

    This week, researchers at Stanford and Google use an enhanced large language model to create simulated people that can remember, plan, and talk to one another in pursuit of their longterm goals.

    Also, a new report says weā€™ve reached a landmark moment for the global energy system. And amid rumours of financial difficulty, Stability AI release a new text-to-image model for enterprise users; itā€™s capable of amazing photorealism.

    Letā€™s go.

    šŸ  Welcome to SimGPT

    This week the generative AI talk orbited around autonomous agents. That is, AI systems that can act autonomously in pursuit of pre-defined goals.

    Researchers at Stanford and Google explained how they used a large language model (LLM) to create 25 simulated people, who were then set loose inside a virtual town called Smallville.

    To create these sims, the researchers hooked up their LLM to an architecture that allows each AI agent to store memories of its past experiences, and then to access relevant memories and use them to plan new actions. Each agent was imbued with its own persona, for example: 'John Lin is a pharmacy shopkeeper at the Willow Market and Pharmacy who loves to help people. He is always looking for ways to make the process of getting medication easier for his customers.ā€™
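
    A toy version of that memory architecture gives a flavour of how it works. What follows is a hypothetical sketch rather than the Stanford and Google code: each observation is stored with a timestamp and an importance score, and retrieval blends recency, importance, and a crude relevance measure before the chosen memories are handed back to the LLM that plans the agent's next action.

        # Hypothetical sketch of a generative-agent "memory stream": store
        # observations, then retrieve the most useful ones to build the next prompt.
        import time
        from dataclasses import dataclass, field

        @dataclass
        class Memory:
            text: str
            importance: float                    # e.g. scored 1-10 by the LLM
            timestamp: float = field(default_factory=time.time)

        class MemoryStream:
            def __init__(self) -> None:
                self.memories: list[Memory] = []

            def add(self, text: str, importance: float) -> None:
                self.memories.append(Memory(text, importance))

            def retrieve(self, query: str, k: int = 3) -> list[Memory]:
                now = time.time()
                def score(m: Memory) -> float:
                    recency = 1.0 / (1.0 + (now - m.timestamp))    # newer is better
                    overlap = len(set(query.lower().split()) & set(m.text.lower().split()))
                    return recency + m.importance / 10 + overlap   # crude stand-in for embedding similarity
                return sorted(self.memories, key=score, reverse=True)[:k]

        # Retrieved memories would be woven into the LLM prompt that plans the
        # agent's next action ("John remembers: ...").
        stream = MemoryStream()
        stream.add("Served a customer at the Willow Market and Pharmacy", importance=3)
        stream.add("Was invited to a Valentine's Day party at the cafe", importance=8)
        relevant = stream.retrieve("What should John do this evening?")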

    The results, say the researchers, were ā€˜believable individual and emergent social behavioursā€™ that saw Smallville become a bustling little town full of autonomous chit-chat, group activities, and trips to the local cafĆ©.

    ā€˜ā€¦for example, starting with only a single user-specified notion that one agent wants to throw a Valentineā€™s Day party, the agents autonomously spread invitations to the party over the next two days, make new acquaintances, ask each other out on dates to the party, and coordinate to show up for the party together at the right time.ā€™

    All this comes amid vast excitement on Twitter this week over the rise of useful generative agents.

    As with the above research, these innovations — which include AutoGPT and BabyAGI — leverage architecture that enhances ChatGPT by allowing it to lay down and then access a stream of past actions, or ‘memories’. Combine that with a plugin that enables ChatGPT to browse the web, and the result is a system that can take an initial goal, get started online, and then prompt and re-prompt itself until it’s finished:
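
    The basic shape of those agents is a loop. Here is a deliberately simplified, hypothetical sketch; call_llm and run_tool are placeholders for the model API and the plugins (web browsing, file writing, and so on) that tools like AutoGPT wire in, so this shows the pattern rather than any particular project's code.

        # Hypothetical sketch of an AutoGPT-style loop: the model chooses the next
        # step, a tool executes it, and the result is appended to a running memory
        # until the model declares the goal finished. call_llm/run_tool are placeholders.

        def call_llm(prompt: str) -> dict:
            """Return e.g. {"thought": ..., "tool": "web_search", "arg": ..., "done": False}."""
            raise NotImplementedError("placeholder for an LLM API call")

        def run_tool(tool: str, arg: str) -> str:
            """Execute a tool such as web search or file writing and return its output."""
            raise NotImplementedError("placeholder for plugins / tools")

        def run_agent(goal: str, max_steps: int = 10) -> list[str]:
            memory: list[str] = [f"GOAL: {goal}"]
            for _ in range(max_steps):
                step = call_llm("\n".join(memory))   # model re-prompts itself with its own history
                if step.get("done"):
                    break
                result = run_tool(step["tool"], step["arg"])
                memory.append(f"ACTION: {step['tool']}({step['arg']}) -> {result}")
            return memory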

    ⚡ NWSH Take: If SimCity or The Sims formed a part of your childhood, then this new research is hypnotic. It makes clear that LLMs will enable us to simulate goal-directed people, and watch as complex social and behavioural dynamics unfold. Imagine video games populated with these simulated humans (hello, Electronic Arts). But we’ll also see the rise of new art forms — a mixture of game and movie — built around them. And then will come the ability to simulate large populations, allowing for new insight on collective phenomena such as voting behaviours, the spread of disinformation, and the evolution of the economy. This new age of AI-fuelled simulation — which I’ve been writing about for a while — is emphatically here. // Meanwhile, AI agents such as AutoGPT promise to elevate the usefulness of generative models for millions of individual users. It’s already clear that for most people using LLMs won’t be about sitting at the prompt line and figuring out great prompts. Instead, wrappers such as this one — which puts AutoGPT-like powers directly into your browser — will allow users to set a goal and then let the LLM iterate its own way to a useful output. Give it a try; something hugely powerful is happening.

    šŸ”Œ New power generation

    Also this week, news of a landmark moment for energy.

    A new report from independent energy think tank Ember says solar and wind accounted for a record 12% of global electricity generation in 2022. Thatā€™s up from 10% in 2021. The increase in wind generation alone in 2022 was the equivalent of the entire annual electricity demand of the UK.

    What’s more, says the report, it’s likely that 2023 will see electricity generation via fossil fuels — mainly coal and natural gas — hit its peak.

    The research team analysed data from 78 countries, representing 93% of global electricity demand.

    ⚡ NWSH Take: Around two-thirds of the world’s electricity is generated by burning fossil fuels. But the transition to solar and wind is now reaching a blistering pace, thanks largely to exponentially falling cost. In 1956 the cost of one watt of solar capacity was $1,825; now it can be as little as $0.72. // If Ember are right, we’ll soon start generating more electricity while burning less fossil fuel: power up, emissions down. That aligns with the International Energy Agency’s most recent and broader forecast; they now have global demand for fossil fuels — via electricity generation or any other use — peaking or plateauing under all their future scenarios, even without any shift in current government policies. // We’re approaching, then, a historic turning point: the decoupling of economic growth and fossil fuels for the first time since the Industrial Revolution. It’s becoming possible to imagine a world of endless, near-zero cost clean electricity. A world of clean energy abundance. What will that make possible?
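
    As a quick back-of-envelope check on that cost decline: falling from $1,825 to $0.72 per watt over roughly 67 years implies an average drop of around 11% a year, sustained for decades. A couple of lines of Python, using only the figures quoted above, make the arithmetic explicit.

        # Back-of-envelope: implied average annual decline in solar cost per watt,
        # using the figures quoted above ($1,825 in 1956, $0.72 today).
        cost_1956, cost_now, years = 1825.0, 0.72, 2023 - 1956

        annual_decline = 1 - (cost_now / cost_1956) ** (1 / years)
        print(f"Average annual cost decline: {annual_decline:.1%}")   # roughly 11% per year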

    šŸ–¼ Get real

    Do you want to look at some amazing AI generated images? Yes, of course you do.

    This week Stability AI released Stable Diffusion XL, a text-to-image model aimed at enterprise users. The model is an advance on Stable Diffusion 2.1, and excels at ultra-photorealism.

    The move comes amid reports that Stability AI is struggling with huge server and talent costs. These reports suggest the company is seeking a new round of funding, and that investors are wary given the current revenue. CEO Emad Mostaque has not commented.

    āš” NWSH Take: A few quick thoughts. The images are stunning; thatā€™s obvious. At this point weā€™ve pretty much entirely scrambled the role that the photograph once played in our culture as a form of proof or marker of veracity. In the wake of puffa coat Pope, Iā€™ve already developed a new reflexive habit: is this real or AI? // As for the rumours about Stability AI, they amount to: ā€˜AI startup experiencing rocket ship growth is struggling to figure out revenue and is a chaotic place to workā€™. Nothing too surprising. Whatever storms the company is experiencing, I hope it can weather them; Mostaqueā€™s vision of AI for the people is a necessary counterweight to the closed model that is being operated by (the misleadingly named) OpenAI and others.

    šŸ—“ļø Also this week

    šŸ‡ØšŸ‡³ The CCP has issued new rules on the training and outputs of generative AI models. Draft rules from the Cyberspace Administration of China say the outputs of those models must reflect the core values of socialism and not undermine the power of the state. This came as Chinese tech giant Alibaba announced plans to roll out its LLM rival to ChatGPT, Tongyi Qianwen, across all its products.

    šŸ‡ŗšŸ‡ø The US government is also looking to establish new regulations around AI. The National Telecommunications and Information Administration is asking for feedback from the public and experts from industry and academia, and wants to establish ā€˜guardrailsā€™ to ensure AI is safe, transparent, and as unbiased as possible.

    šŸ‘®ā€ā™‚ļø The Boston Dynamics robodog will patrol the streets of NYC on behalf of the New York Police Department. The NYPD experimented with Spot the Dog in 2021 and faced criticism from civil rights organisations. Now the new mayor, Eric Adams, is bringing Spot back.

    šŸš— Ford says it will spend $1.3 billion to convert its 70-year-old factory in Oakville, Canada, into an assembly plant for electric vehicles. The auto giant says it wants the production capacity to sell 2 million EVs a year worldwide by 2026.

    šŸ’‰ Ghana became the first country to approve a ā€˜game-changingā€™ malaria vaccine. Trial data indicates the R21 vaccine was up to 80% effective when given in three doses plus a booster after one year. Malaria kills around 600,000 people each year, many of them children.

    šŸŒ– China says it will build a permanent base on the Moon using bricks made from Moon dust. The South China Morning Post reported that officials say building will start in 2028. Back in October I wrote on how NASA is preparing for potential geopolitical tensions arising out of multiple Moon missions by the US and China.

    šŸŖ Four volunteer test subjects will spend a year locked in a simulated Martian environment as part of NASA research for a mission to Mars. The 3D-printed structure is situated in a warehouse at the Johnson Space Center in Texas, and is intended to simulate a future NASA base on Mars. The volunteers will grow their own food, conduct experiments, and exercise.

    šŸŒ Humans of Earth

    Key metrics to help you keep track of Project Human.

    šŸ™‹ Global population: 8,027,591,427

    šŸŒŠ Earths currently needed: 1.8019217768

    šŸ’‰ Global population vaccinated: 64.3%

    šŸ—“ļø 2023 progress bar: 28% complete

    šŸ“– On this day: On 15 April 1755 Samuel Johnsonā€™s A Dictionary of the English Language is published in London.

    Designs for Life

    Thanks for reading this week.

    We humans have always been obsessed with our own reflection. And now we have a new way to study it: by using LLMs to create simulated humans that chat to one another, organise parties, and visit the local shops. Itā€™s yet another case of new world, same humans.

    This newsletter will keep watching, and working to make sense of it all. And thereā€™s one thing you can do to help: share!

    Now youā€™ve reached the end of this weekā€™s instalment, why not forward the email to someone whoā€™d also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.

    Iā€™ll be back next week. Until then, be well,

    David.

    P.S Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.newworldsamehumans.xyz
  • Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.

    If youā€™re reading this and havenā€™t yet subscribed, join 24,000+ curious souls on a journey to build a better future šŸš€šŸ”®

    To Begin

    I put the newsletter on pause last week; did I miss anything?

    Given what is unfolding right now, itā€™s hard to make this newsletter anything other than a generative AI revolution update. I donā€™t want to stoke the hype yet further, but Iā€™ve never seen anything quite like this.

    Given all that, this week weā€™ll dive into a high-profile petition to pause work on new generative models. Also, weā€™ll look at the new hyperreality taking shape around us via Midjourney and its community of inventive users.

    But itā€™s not all AI; thereā€™s also an intriguing new report on global population change from the Club of Rome.

    Letā€™s get into it.

    šŸ¤– For the people

    This week, another generative AI story that pushes 2023 deeper into the realms of what seemed, recently, possible only in science-fiction.

    Itā€™s not yet another platform, plugin, or viral image (more on those below) but a call to slow down. Over 1,000 technology leaders signed a petition demanding a pause of at least six months on the training of AI systems more powerful than GPT-4.

    Signatories included Elon Musk, Yuval Noah Harari, Stability AIā€™s Emad Mostaque, and Apple co-founder Steve Wozniak. And their language was pretty apocalyptic:

    According to the authors of the petition: ā€˜recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one ā€“ not even their creators ā€“ can understand, predict, or reliably control.ā€™

    The scale of their concern was lent support by a research paper published last week. It saw Microsoft researchers report that GPT-4 shows ā€˜sparks of AGIā€™. The model, they point out, shows high-level competence across mathematics, coding, vision, medicine, law, and psychology, and can solve novel problems in those domains without any need for special instructions: ā€˜in all of these tasks, GPT-4's performance is strikingly close to human-level performanceā€™.

    Meanwhile, the iconic and ever-controversial AI safety expert Eliezer Yudkowsky went full pelt in Time magazine. He didnā€™t sign the petition, he says, because it doesnā€™t go far enough:

    All training of large models, says Yudkowsky, needs to be shut down indefinitely and worldwide. He says the governments of Earth must come together in a concerted effort to stop an AI-fuelled human extinction.

    āš” NWSH Take: Pretty intense, right? Yudkowsky is, and has long been, an outlier on all this. Meanwhile, others say this weekā€™s petition signatories have fallen prey to OpenAIā€™s apocalypse marketing: a plan to get everyone scared and then sell subscriptions. // For my part, I donā€™t think AI annihilation is imminent; nor do I think these fears are founded only in hype. GPT-4ā€™s competence across all kinds of reasoning tasks is insane. And for all the reams of coverage (guilty), I donā€™t think weā€™re anywhere near processing the implications. It no longer seems far-fetched that an AI model could start behaving in strange and uncontrollable ways in the near-term. Itā€™s emphatically time to get serious about alignment. // Alignment is first a technical problem: how do we make sure AIs only do what we want them to do? After this, though, it becomes a political problem. Whose values should we align our AIs with? Those of Californian tech bros? // We canā€™t put the AI genie back in the bottle, and in practice a global ā€˜pauseā€™ is highly unlikely. That means the only answer here is to speed up research on the technical challenge of alignment, and to allow a plurality of AIs, empowering different peoples and communities to live and create according to their own value systems. That is real alignment. To that end, check out open source AI group LAIONā€™s petition for a new internationally-funded supercomputer to train open source foundation models.

    šŸ“ˆ Growth mindset

    Also this week, a huge if true forecast on the future of the human population.

    A new study commissioned by the Club of Rome forecasts that if current trends continue then the global population will hit 8.8 billion in around 2050, before declining rapidly to 7.8 billion by the end of the century.

    The study, conducted by think tank Earth4All, also games out a scenario in which governments invest in policies known to curtail population growth, such as education and social services. Here, population peaks at 8.5 billion in around 2040 and falls to 6 billion by 2100.

    Both projections are far below last yearā€™s UN Population Prospects forecast, which had population peaking at 10.4 billion in the 2080s.

    The Club of Rome is best known for the now (in)famous 1972 report The Limits to Growth, which warned of impending environmental crisis and social breakdown due, in part, to strains imposed by overpopulation.

    The report came amid a wave of neo-Malthusian anxiety in the decades after WWII. A 1968 book called The Population Bomb ā€” which influenced the thinking of the Club of Rome ā€” raised the spectre of hundreds of millions of people starving to death as population growth exceeded food supply.

    āš” NWSH Take: The original Limits to Growth report is today the subject of fierce disagreement. Critics say the Club gave voice to unfounded fears motivated by an ideological distaste for modernity. Proponents point out that the report offered a number of different scenarios, and that the growth-induced systemic breakdown it envisioned may yet eventuate. // This new statement on population could end up being just as contested. The Club now accept that their population bomb wonā€™t go off. And they celebrate their finding that population is set to peak sooner and lower than the UN expect ā€” stressing that itā€™s good news for the environment. Meanwhile, though, a niche but growing school of thought says that population collapse is the real crisis coming down the track; rapidly shrinking and ageing populations, runs this line, will kill productivity and threaten economic collapse. // Where does the truth lie? Most mainstream demographers say population collapse isnā€™t on the cards, and that ageing populations donā€™t have to mean economic calamity. Meanwhile, itā€™s not overpopulation but intense patterns of high and damaging consumption in the rich world that are the primary drivers of climate change. As ever with demography, it seems the truth lies between the extremes.

    šŸŽ­ Real life

    Version five of the text-to-image tool Midjourney was released two weeks ago. And this week, users went wild.

    On Reddit, Midjourney enthusiasts started sharing photorealistic, news report style images of historical events ā€” such as 2001ā€™s devastating Great Cascadia earthquake in Oregon:

    The truth, of course, is that no such event took place; this is all fictitious ā€” an AI-fuelled experiment in alternative history.

    Meanwhile, Chinese users of the tool are creating pseudo-documentary images of the southwestern city of Chongqing in the 1990s.

    All this comes days after the first truly viral AI-generated image: of Pope Francis in a white puffer coat.

    āš” NWSH Take: In his 1981 book Simulacra and Simulation, the French philosopher Jean Baudrillard wrote about hyperreality: the emergence of a media environment in which the boundaries between the real and our representations of the real become ever-more blurred. Digital media massively amplified that phenomenon. All of us recognise the feeling, today, of living inside a tech-fuelled hall of mirrors in which the difference between image and reality is hard to discern, or even meaningless. // What can be said? That was before this generative AI revolution and tools such as Midjourney, which are now achieving photorealism that is impossible to distinguish from the real thing. These AI-generated pseudo-photos are perfect representations of representations; signs that point only to other signs ā€” exactly the phenomenon that Baudrillard put at the heart of his theory. They make possible a whole new level of alternate history; a convincing mass media documentation of events that never took place. Thereā€™s going to be so, so much more of this.

    šŸ—“ļø Also this week

    šŸ° This Twitter user made a AI-fuelled virtual companion by hooking ChatGPT to a cute but grumpy holographic rabbit avatar. Itā€™s just one signal of how the generative AI revolution will unleash a tsunami of virtual companions; the rise of this trend is a longstanding NWSH obsession.

    šŸ§  Direct brain interface startup Neuralink is searching for a partner to help it run clinical trials on humans. In 2022 the FDA rejected Neuralinkā€™s application to start human trials; the company has since been working to address the safety concerns that were raised.

    šŸ›© A Swiss startup is working on a hydrogen-powered jet that it says will cut flights from Europe to Australia to four hours. Destinus has been testing prototypes for two years, and is now partnering with Spainā€™s Ministry of Science. It currently takes around 20 hours to fly from Europe to Australia.

    šŸ‘Øā€šŸ’» A new report says ChatGPT could impact 300 million full-time jobs across the globe. The report by Goldman Sachs economists says the technology is ā€˜a major advancement with potentially large macroeconomic effects.ā€™ But most jobs, they say, will be complemented by AI rather than replaced entirely.

    šŸ›’ Chinese ecommerce titan Alibaba is planning to break itself up. The company says it will split into six business units, some of which may be listed or sold. The announcement seems intended to placate the CCP, which across the last three years has moved aggressively to diminish the power of domestic tech giants.

    šŸ° Disney has reportedly fired its entire metaverse division. Last year the entertainment giant called the metaverse ā€˜the next great storytelling frontierā€™ and announced plans to bring blended digital-physical experiences to its parks. The company has recently been under pressure from investors to cut costs.

    ā›”ļø This just in as the newsletter goes to press; the Italian government has banned ChatGPT citing concerns over data privacy breaches. The Italian Data Protection Authority says the move is temporary and will be revoked ā€˜when ChatGPT respects privacyā€™. OpenAI CEO Sam Altman says the company ā€˜defers to the Italian governmentā€™, but believes it has followed all relevant privacy laws.

    šŸŒ Humans of Earth

    Key metrics to help you keep track of Project Human.

    šŸ™‹ Global population: 8,025,029,075

    šŸŒŠ Earths currently needed: 1.8010517836

    šŸ’‰ Global population vaccinated: 64.4%

    šŸ—“ļø 2023 progress bar: 25% complete

    šŸ“– On this day: On 1 April 1976 Steve Wozniak and Steve Jobs found Apple Computer in California.

    Speed Warning

    Thanks for reading this week.

    The ever-more urgent quest to conform machine intelligence to our values is yet another classic case of new world, same humans.

    Iā€™ll keep watching. And thereā€™s one thing you can do to help: share!

    Now youā€™ve reached the end of this weekā€™s instalment, why not forward the email to someone whoā€™d also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.

    Iā€™ll be back next week. Until then, be well,

    David.

    P.S Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.newworldsamehumans.xyz
  • Welcome to New World Same Humans, a newsletter on trends, technology, and our shared future by David Mattin.

    If youā€™re reading this and havenā€™t yet subscribed, join 24,000+ curious souls on a journey to build a better future šŸš€šŸ”®

    At the start of the year I promised the return of short notes. Hereā€™s the first: a meditation on the ChatGPT moment weā€™re living through right now.

    To avoid claims of false advertising: this one is more of an essay than a note.

    If youā€™d rather listen than read, just scroll up and hit play. But enough preamble; letā€™s get into it.

    The generative AI hype train is thundering forwards right now, and ChatGPT ā€” which was released in November ā€” was the fuel that accelerated it to its current speed.

    On the face of it, thatā€™s a bit odd. The underlying model, GPT-3, was made public more than two years earlier. Why the big noise now?

    ChatGPT uses an enhanced version of that model, and so produces better outputs. But my contention is that itā€™s the chat element ā€” that is, the conversational nature of the tool ā€” thatā€™s responsible for ChatGPTā€™s colonisation of the zeitgeist. People love the back-and-forth quality of interacting with this thing.

    Iā€™m interested in this, and the reasons for it. Because it seems to me that a quest to understand ChatGPTā€™s seductive conversational power can help us commune with a deep but under-appreciated truth about human thought.

    A truth that leads us, in turn, to some conclusions on our future relationship with machine intelligence.

    *

    In a seminal 1998 research paper, the philosophers Andy Clark and David Chalmers introduced an idea they called the extended mind thesis (EMT).

    The EMT says that mind is best understood as a set of cognitive processes that extend beyond our brains and into the external world. Consider, for example, a person using a notebook and pen to help perform a series of simple calculations. The notebook and pen are, say Clark and Chalmers, just as much a part of the cognitive processes at work here as the personā€™s brain. The notebook, for example, is acting as a kind of external memory bank.

    Itā€™s arbitrary then, according to the EMT, to say that mind is happening in the brain but not in the notebook; instead, the brain, pen, and notebook are part of one big cognitive system, and we can best understand that system as mind.

    It was an arresting argument, and itā€™s proven an influential one. Whatā€™s more, 25 years on we citizens of the internet have been delivered into a relationship with technology that makes tangible the strengths of this idea.

    Iā€™m talking, here, about our relationships with our phones.

    I tend to do my deepest thinking when Iā€™m out for a walk. Often, Iā€™ll reach for some half-remembered fact, person, or quote that I need to continue my train of thought, find that I canā€™t recall it, and then go to my phone to look it up. My phone, here, or perhaps more properly the internet itself, is acting as a kind of extension of my own memory ā€” one containing pretty much all the knowledge in human history that can be encoded as words or pictures. And the whole process is so seamless ā€” think, encounter block, look it up, keep thinking ā€” that the phone really does feel a natural extension of my mind. When I forget my phone, the feeling is one of my thought process being constantly interrupted. At its most acute it feels as though a part of me is missing.

    ChatGPT offers users the same kind of feeling. The feeling, that is, of having your mind extended beyond the confines of your skull. Itā€™s perhaps the first technology since the iPhone to offer that experience in a compelling new way. That truth, surely, has helped drive the excitement over the last three months.

    But the current ChatGPT moment is not driven only by the feeling that the tool allows for mind extension. Thereā€™s also the feeling that the mind extension happening is a sudden and dramatic evolution of anything weā€™ve experienced before via notebook, calculator, or the phone as portal to the internet. Thereā€™s a widespread feeling out there that ChatGPT is an early signal of a revolution of era-defining consequence ā€” even though, in truth, we havenā€™t yet seen the use cases, or the impact on the economy, to justify that belief.

    Why is this? Why does ChatGPT feel such a big deal?

    The answer Iā€™m fermenting: itā€™s because ChatGPT taps into, in a way even the phone does not, a deep truth about human thought. That is, its fundamentally dialogic, or conversational, nature.

    The idea that underpins this is simple: itā€™s that when we think, we talk to ourselves. What you call your ā€˜internal monologueā€™ is really a dialogue conducted by one person. Someone is talking (internally, not aloud) and someone is listening and will then reply, and those people are both you.

    *

    The idea that human thought is fundamentally dialogic has a long history, which passes through the 20th-century Russian philosopher and literary critic Mikhail Bakhtin.

    Bakhtin said that language is primordially a social instrument: a process that evolved out of games of call and response conducted by two or more parties. And because language is the substrate that makes symbolic meaning and the higher forms of thought possible, that means thought, too, is fundamentally dialogic in nature.

    For we moderns this is a revolutionary idea. We tend to believe that thought, in its purest sense, is something that happens inside the mind of a single individual.

    Bakhtin, and others since whoā€™ve played with the idea of dialogic thought, invert this belief. They say that thought in its purest sense happens not inside the mind of one person but between groups of people; that is, between collections of minds. Under this view the extended mind thesis applies not only to the way individual minds can be extended by tools, but also, and primarily, to the way all our minds are necessarily extended by other minds. Indeed, under this view mind itself is best understood as a phenomenon that emerges between us, rather than inside any one of us individually.

    Itā€™s notable that the earliest works of philosophy in the western tradition seem to acknowledge the dialogic nature of thought. Socrates gathers others around him and together they engage in a process of back-and-forth reasoning that is, he tells them, the path towards enlightenment. The Socratic method taps deep into the idea that thought is primordially a social phenomenon.

    Via a complex psychospiritual process entangled with the evolution of the Enlightenment self, we lost touch with that truth. Instead, we came to see thought as, foremost, an inner and private unfolding. But in losing touch with the primacy of social thought, we also lost touch with another truth. Yes, thought conducted silently by one person is private and inner; but because it relies on the dialogic tool that is language, it too carries a fundamentally dialogic nature. When we think, we talk to ourselves.

    We might say that this strange ability to split the self ā€” so that we can at once talk and listen to ourselves talk ā€” is consciousness. That is to say, it is the state of self-awareness that only we among Earthā€™s creatures seem to possess in its highest form. The idea that language in some deep sense is human consciousness, that it creates the human mode of being in the world, is one I explore in depth in the ongoing essay series The Worlds to Come.

    *

    Iā€™ve argued for the idea that thought ā€” that consciousness itself ā€” is in some deep sense dialogic. What does all this have to do with ChatGPT?

    I hope the superficial connection is clear: in ChatGPT, we have an instrument that can externalise and amplify the internal dialogue that constitutes thought.

    As weā€™ve seen, weā€™ve always had access to entities that can externalise our inner dialogue: other people. But other people are beings with their own cognitive and social agency. They have personhood. ChatGPT, by contrast, is not a person; it is a tool.

    Itā€™s this dual quality that is new and special about ChatGPT: it allows for the externalisation of the dialogic essence of my private thought, while being a tool that is best understood as an extension of me, rather than a person best understood as essentially an other.

    In this way, ChatGPT offers a radically new form of mind extension. The excitement around it points to a submerged awareness among its users that this tool is more than just another useful app for summarising documents, or searching for information. We see in it, instead, the beginnings of a new way of doing thought. A way of externalising, and drawing out, an essential feature of our interior lives.

    Right now, ChatGPT enacts a highly imperfect version of this promise. While the quality of its responses is a great advance on anything weā€™ve seen before, itā€™s still prone to factual errors and occasional nonsense, and responses that are not wrong but in some way off, or just bland. But all this will be improved via larger models that are better able to retrieve factual information, and cope with context and nuance. Itā€™s the glimpse of what is ahead that has proven so exciting ā€” even shocking.

    Pretty soon, there will be a proliferation of such models. Weā€™ll all be able to customize our own, so that it knows our tastes, preferences, and cognitive styles.

    These models, trained as they are on an appreciable amount of all the text in existence, are a strange new instantiation of our shared linguistic inheritance. Itā€™s as though weā€™ve created a human hivemind and given it a voice, such that weā€™re now able to talk to it at will. When we think, we talk to ourselves: that truth is now manifest in a whole new way.

    Eventually, having a personal large language model (LLM) ā€” a virtual conversational companion in your pocket 24/7 ā€” will be no more remarkable than having a phone. When that time comes, in what ways will our thinking be amplified? In what ways will the nature and modes of our thinking change? And we must also ask: how might these models, which reflect back to us our own assumptions and prejudices, limit our thinking, or act to push us away from ideas and perspectives that lie outside the mainstream?

    *

    Those questions are valuable because when we ask them, weā€™re approaching a more accurate, and ultimately more fruitful, relationship with machine intelligence.

    Contrary to much of the hype and/or panic circulating at the moment, ChatGPT and other language models arenā€™t going to render higher forms of human thought or creativity obsolete. Theyā€™re not simply going to write our books for us, do our philosophy, tell us the answer. These models canā€™t think creatively in the commonly understood sense of that phrase, because theyā€™re not conscious beings responding to a lived experience of the world. They are, rather, stochastic parrots playing a high-level game of word association. Itā€™s just that when they play that game well enough, and effectively simulate a human interlocutor, theyā€™re able to amplify our thinking such that we arrive at cognitive destinations faster than we would have otherwise, or arrive at destinations that we would never have reached at all.

    In short, we need to understand that whatā€™s most exciting about these models is not what we will get straight from them; itā€™s what they will help us get from ourselves. And theyā€™ll help us most effectively, of course, if we bring our own powers of creativity and critical reflection to the party.

    If you havenā€™t experienced this aspect of ChatGPT, give it a try. Choose an idea, argument, or line of thinking, articulate it to the chatbot and then go back and forth, picking up on aspects of its responses that you find interesting and asking it to develop them, and then responding in turn. Donā€™t forget to challenge the assumptions that start to become apparent in ChatGPTā€™s responses, and ask yourself what itā€™s missing. Do that for five minutes, and see where you get. At its best, it can feel like the cognitive equivalent of driving a car instead of walking.
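
    If you'd rather run that back-and-forth through code than the chat window, here's a minimal sketch of the same loop using the OpenAI Python client as it stood in early 2023. The model name and prompts are illustrative assumptions; the point is simply the alternating structure of the dialogue.

        import openai

        openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

        # Seed the dialogue with an idea you want to develop.
        messages = [
            {"role": "system", "content": "You are a thinking partner. Respond briefly, then ask one probing question back."},
            {"role": "user", "content": "My starting idea: human thought is fundamentally dialogic."},
        ]

        for _ in range(3):  # a few rounds of back-and-forth
            reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
            content = reply["choices"][0]["message"]["content"]
            print(f"\nModel: {content}\n")
            messages.append({"role": "assistant", "content": content})
            # Your move: pick up a thread, challenge an assumption, ask what it's missing.
            messages.append({"role": "user", "content": input("You: ")})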

    For my part, this kind of conversation is already becoming commonplace. I can feel the seeds of a new habit taking root: Iā€™ll just take this to ChatGPT. And Iā€™ve started to wonder: how long until I come to feel the same about this tool as I do about my phone? How long until the ability to take a train of thought to ChatGPT is so expected, so natural, that when I donā€™t have access to the tool I feel as though my thought process has been interrupted? And how long until many others feel the same?

    What Iā€™m envisioning is a near future in which this ability to commune with the human hivemind, as made manifest by an LLM, comes to seem a natural part of thought. Yes, weā€™re a long way from that right now. But it feels as though weā€™re taking the first steps towards a new and powerful kind of augmentation.

    *

    At the outer edges of all this I wonder: is this the beginnings of the long process of human-technological convergence that transhumanists (think Ray Kurzweil) tell us is inevitable? A process that sees we humans, or at least some of us, become something else?

    Iā€™m not one of those who views the post-human future with unalloyed enthusiasm. But via generative models and other technologies ā€” including brain implants and techniques of genetic manipulation ā€” Iā€™m increasingly persuaded that some kind of Great Divergence is coming, in which we homo sapiens branch off from one another and become various different kinds of (post)humans.

    Certainly, the possibility that we may not all be the same humans for much longer haunts the borders of this newsletter. It increasingly seems to me that our convergence with the technologies weā€™re building, and the almost impossible task of making any practical or moral sense of it, is the most important shared challenge we face.

    In that case, the project of the age is to begin, at least, to figure out where we stand. Perhaps we can take it to ChatGPT.

    Go Chat

    Thanks for reading this essay from New World Same Humans.

    Now that youā€™ve reached the end, why not take a second to forward this essay to one person ā€“ a friend, family member or colleague ā€“ whoā€™d also find it valuable? Or share it across one of your social networks, and let people know why itā€™s worth their time. Just hit the share button!

    Iā€™ll be back later this week as usual; until then, be well.

    David.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.newworldsamehumans.xyz
  • Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.

    If youā€™re reading this and havenā€™t yet subscribed, join 24,000+ curious souls on a journey to build a better future šŸš€šŸ”®

    To Begin

    Itā€™s another packed instalment of New Week. What do we have in store?

    This week, tech giants from Microsoft to Snap put their arms around a trend that has long been a NWSH obsession.

    Meanwhile, Tesla and OpenAI outline far-reaching manifestos: what does it mean when corporations incubate the kinds of social and political policies we usually associate with government? And a new US startup, Figure, offers a first glimpse of its humanoid robot.

    Letā€™s get into it.

    šŸŽ­ Talk to me

    This week, a constellation of signals point to the mainstreaming of a trend long in the making. Iā€™m talking, here, about virtual companions.

    Snap launched My AI, a ChatGPT-fuelled conversational agent, inside their app. The feature is intended to serve as a general-purpose chatbot; Snap say it might plan a hiking trip for a long weekend, or suggest a recipe for dinner. Or even serve the userā€™s love of poetry:

    Meanwhile, Microsoft launched a new feature that ā€˜changes the personalityā€™ of the AI chatbot inside its Bing search engine.

    Users can toggle between three options: creative, balanced, and precise, depending on the type of answers they want from the bot.
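
    For what it's worth, here's one plausible way such a toggle could map onto generation settings: a different sampling temperature and system prompt per mode. This is purely an illustrative sketch, assuming the OpenAI Python client of early 2023; it is not how Bing actually implements its personalities.

        import openai

        openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

        # Hypothetical presets: higher temperature for looser, more inventive answers.
        PRESETS = {
            "creative": {"temperature": 1.0, "style": "Be imaginative and expansive."},
            "balanced": {"temperature": 0.7, "style": "Be helpful and even-handed."},
            "precise":  {"temperature": 0.2, "style": "Be terse, literal, and factual."},
        }

        def answer(question, mode="balanced"):
            preset = PRESETS[mode]
            resp = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                temperature=preset["temperature"],
                messages=[
                    {"role": "system", "content": preset["style"]},
                    {"role": "user", "content": question},
                ],
            )
            return resp["choices"][0]["message"]["content"]

        print(answer("What should I cook tonight?", mode="creative"))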

    And in a Facebook update, Mark Zuckerberg said Meta are working on ā€˜AI personas that can help people in a variety of waysā€™ for Instagram, Messenger, and WhatsApp:

    ā€˜We're exploring experiences with text (like chat in WhatsApp and Messenger), with images (like creative Instagram filters and ad formats), and with video and multi-modal experiences. We have a lot of foundational work to do before getting to the really futuristic experiences, but I'm excited about all of the new things we'll build along the way.ā€™

    āš” NWSH Take: Itā€™s happening: the mainstreaming of a trend that Iā€™ve been tracking for years. I started speaking about virtual companions in the early 2010s; back then, the idea that millions would one day see AI-fuelled entities as counsellors, companions, or even friends seemed, to many, outlandish at best. Via generative AI, that conversation has been transformed. Bingā€™s tentative approach to creating AI personalities marks the beginning of a long collision between virtual companions and search. And it wonā€™t be long ā€” if Snap is anything to go by ā€” until a personalised, conversational, poem-writing virtual companion is a part of every social platform. // But this is just the start. For a glimpse of whatā€™s coming, check out the conversational generative AI platform Character.ai, where users are creating chatbots based on their favourite fictional or historical characters. Or the AI companion app Replika ā€” thousands of users recently complained that an update had destroyed their AI romantic partner. // Virtual companions ā€” a counsellor, friend, and philosopher in your pocket 24/7 ā€” are heading for the everyday lives of billions. Itā€™s an innovation that may prove as transformative as the car, or the iPhone.

    šŸ› Policy agenda

    Elon Musk took to the stage at a Tesla investor event this week, to unveil the long-awaited third part of the companyā€™s Master Plan.

    At the core of this part, it turned out, was nothing less than a Grand Unified Theory (GUT) of the planetary transition to sustainable energy. That GUT, according to Musk: electrify the grid, make all road vehicles electric, install a heat pump in every home, move towards green hydrogen, and build electric boats and planes.

    The global shift, said Musk, will need investment of $10 trillion. And he says Tesla can play a part in every step.

    Investors were reportedly disappointed; theyā€™d hoped for more detail on Teslaā€™s product roadmap. Shares fell 5% in the wake of the event.

    All this came in the same week that the IEA confirmed that CO2 emissions hit a record high ā€” albeit lower than expected ā€” last year.

    Still, it was another aspect of all this that caught my eye:

    āš” NWSH Take: Thereā€™s no denying the Master Plan Part 3 was vague; a high-level, wouldnā€™t it be great if march through the journey to decarbonisation. Still, see it alongside another much-vaunted corporate statement this week, and a pattern starts to emerge. Iā€™m talking, here, about OpenAIā€™s Planning for AGI and beyond. Itā€™s an amazing document, making clear that OpenAI will cancel its commitments to equity shareholders if it deems it necessary, and may in future fund ā€˜the worldā€™s most comprehensive UBI experimentā€™. In other words: we know our AGI might break capitalism, and weā€™re figuring out some answers. // In Tesla and OpenAIā€™s statements this week, then, we glimpse a truth. In ever-more acute ways, our governments simply canā€™t process the technology revolution weā€™re living through. Instead, itā€™s falling to technology companies to articulate the sociopolitical arrangements that will shape our future. On the one hand, itā€™s welcome that OpenAIā€™s Sam Altman seems to take this responsibility seriously; heā€™s talked endlessly in recent months about releasing AI advances gradually so as to minimise damaging social impacts ā€” compare that with the Zuckā€™s move fast and break things credo. On the other, the leaders inside these companies constitute a tiny, strange, and unaccountable elite. Are we okay with this? One idea whose time has come: publicly-elected representatives on the boards of these companies. I donā€™t pretend the idea is easy to enact. But itā€™s worth investigating.

    šŸ¤– Go figure

    US robotics startup Figure broke out of stealth mode this week, when it released the first images of its all-purpose humanoid robot.

    The company has generated excitement ever since news of its existence, and $100 million starting capital, was revealed in September. Figure is founded by Brett Adcock, also the co-founder of Archer Aviation, and it counts former Boston Dynamics, Tesla, and Apple engineers among its team of 42 staff.

    The core idea? We donā€™t have enough people.

    Demographic change, including ageing populations, means the labour force is shrinking. We live and work in built environments fitted out for human-sized and human-shaped beings. A new army of humanoid robots, says Adcock, is the answer to our labour and productivity woes.

    āš” NWSH Take: Plenty of people are on the same page as Adcock. Including Elon (yes, weā€™re talking about him again); at the Tesla event referenced in the previous story, Musk outlined his belief that humanoid robots will eventually outnumber people: ā€˜itā€™s not even clear what an economy means at that pointā€™. // Thereā€™s little doubt that a humanoid device of the kind Figure want to build would be economically transformative. The real question, though: how far away is it? And the answer: we arenā€™t really sure. Researchers at Oxford University recently asked AI experts for a view on this, and the experts were not much in agreement. // Figure revealed little on their timeline, and the roadmap for Teslaā€™s Optimus humanoid is similarly unclear. Alphabetā€™s Everyday Robots division is doing amazing work to bring together advanced robotics and large language models in order to create a household robot we can talk to as we do one another. At some point, surely, there will be a breakthrough moment. ChatGPT and the generative AI wave has already kicked off a great enweirdening of the global economy; things could soon get a whole lot more strange.

    šŸ—“ļø Also this week

    šŸŽµ TikTok says it will limit teen users to 60 minutes of screen time per day. Teens that hit the limit will be asked to enter a passcode to keep watching. The users set the passcode, and can disable the feature entirely if they wish. TikTok say the feature will help younger users manage their time on the app. Back in New Week #43 I wrote on how the CCP insists on camera-enabled facial recognition to limit the time Chinese youth spend on video games.

    šŸš« A new report says that a record number of countries enforced internet shutdowns in 2022. Internet rights group Access Now says 35 countries enacted 187 shutdowns, most triggered by mass protest or conflict. India came top of the list, with 84 shutdowns.

    šŸ¦¾ Microsoft launched a multimodal AI that can work with both images and language. Kosmos 1 can understand and label images, solve visual puzzles, perform visual text recognition, and understand natural language instructions. Microsoft say that multimodal AIs of this kind are the best route towards AGI.

    šŸŒ¤ The UN says scientists should find ways to reflect the sunā€™s rays away from the Earth. In a new report published this week, UN scientists said weā€™re not on track to limit warming to 1.5C, and should therefore study in more detail a ā€˜speculative group of technologiesā€™ that may allow us to reflect the sunā€™s heat.

    šŸ“» A US startup launched a new tool that uses GPT-3 to create an autonomous local radio show. RadioGPT will comb through local news sources and Twitter feeds to create relevant scripts, and then use convincing synthesised voices to convert the scripts into radio shows that feature local news and classic pop hits. The platform can even be trained to emulate the voices of locally popular DJs. Last week I wrote about the transformative collision between mainstream media and generative AI.

    šŸ§  Scientists say that lab-grown brain organoids herald a new era of artificial biointelligence. First developed in 2013, organoids are tiny clumps of neurons cultivated from human stem cells; researchers at the Johns Hopkins Bloomberg School of Public Health this week published a paper in Frontiers in Science, laying out a roadmap for the convergence of conventional and organoid AI. Back in New Week #102 I wrote on how an organoid had taught itself how to play the video game Pong.

    šŸŒ” The European Space Agency says the Moon should have its own time zone. In a statement this week, the ESA said that it and other international agencies were working on an agreement to create a universally agreed lunar time and other standards for communications and navigation services.

    šŸŒ Humans of Earth

    Key metrics to help you keep track of Project Human.

    šŸ™‹ Global population: 8,019,881,908

    šŸŒŠ Earths currently needed: 1.7993041243

    šŸ’‰ Global population vaccinated: 64.1%

    šŸ—“ļø 2023 progress bar: 17% complete

    šŸ“– On this day: On 3 March 1938 oil is discovered in Saudi Arabia, in an American-owned well in Dammam that soon becomes the worldā€™s largest source of crude oil.

    Always There

    Thanks for reading this week.

    The ongoing collision between conversational AI agents and the eternal human quest for counsel, friendship, and even intimacy is a classic case of new world, same humans.

    This newsletter will keep watching. And thereā€™s one thing you can do to help: share!

    Now youā€™ve reached the end of this weekā€™s instalment, why not forward the email to someone whoā€™d also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.

    Iā€™ll be back next week. Until then, be well,

    David.

    P.S Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.newworldsamehumans.xyz
  • Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.

    If youā€™re reading this and havenā€™t yet subscribed, join 24,000+ curious souls on a journey to build a better future šŸš€šŸ”®

    To Begin

    Itā€™s a bumper instalment this week. What lies ahead?

    Generative AI is plunging us into a new world of infinite shadow and simulated media, and itā€™s going to be weird.

    Meanwhile, the results are back on the worldā€™s largest trial of the four day work week. And a California startup wants to bring your most treasured memories back to full, immersive life.

    Letā€™s get into it.

    šŸŽ­ In the hall of mirrors

    This week, a constellation of signals converged to send a message about the AI-fuelled hyperreality set to emerge around us.

    A few weeks ago I gave a brief mention to Nothing Forever, an entirely AI-generated version of the 1990s sitcom Seinfeld.

    The show featured blocky 8-bit style graphics, and a weird, occasionally funny script generated by GPT-3. It became a hit on streaming platform Twitch, but got banned when the Jerry character made transphobic comments. This week the creators, Mismatch Media, took to their company Discord to announce that the show will soon return to Twitch with new script controls in place to ensure there are no more toxic jokes.

    The show is part of a new wave of AI-generated shadow media thatā€™s emerged in the opening months of 2023. Look at this new continuously streaming ā€˜fully autonomousā€™ podcast starring a rolling cast of AI characters including Joe Rogan ā€” all of whom respond to questions typed into the chat by the audience on Twitch.

    Or see this surreal generated talkshow, featuring a virtual Conan Oā€™Brien and Chris Rock:

    Right now, these shows are more about creating intriguing experiments than they are about genuinely entertaining content. But they are strangely mesmerising. And part of their mesmeric power is the feeling that theyā€™re early signals of something huge, strange, and transformative for entertainment media.

    Itā€™s not just traditional, top-down media thatā€™s set to be impacted by AI. The kinds of representations that we used to call user-generated content will be revolutionised, too. One signal? Scroll through this amazing Twitter thread full of middle-aged people using TikTokā€™s new teen face filter, and becoming emotional as they stare back at their long-lost younger self:

    How long, I wonder, until people start using generative AI tools to create and deploy younger (smarter? more charismatic?) versions of themselves?

    āš” NWSH Take: Generative models will turn legacy media ā€” including nine decades of television ā€” into training data. The result? Infinite dancing shadows based on iconic shows, and stars, of years past; see the AI Seinfeld above. Questions abound. Will an AI version of a hit show ever become a hit in its own right? Who owns the rights to such content? Weā€™ll see media companies ā€” and the estates of deceased film and TV stars ā€” build and license AI models of their own, allowing others to create new content based on their work. // Meanwhile, weā€™re about to be hit by a tsunami of generated media; Amazon is reportedly being flooded by AI-generated books, and this iconic sci-fi magazine had to close submissions this week after being swamped by writers sending stories written by ChatGPT. The bar for average content will be raised. The trouble is, no one wants average content. Itā€™s not much use to, say, Disney, that theyā€™ll soon be able to make 100 quite good animated films at much reduced time and cost. No one wants 100 quite good films; they just want the best one. So the challenge for those who want to stand out will remain the same: theyā€™ll have to create exceptional stuff. But now, that will mean using AI to amplify the best human creators. // Meanwhile, every connected person will have the ability to become an AI-fuelled content machine. The French philosopher Jean Baudrillard wrote about hyperreality: the intertwining of the real with our representations, until the distinction becomes lost. A whole new AI-fuelled hyperreal is emerging around us. Iā€™ll be writing more about that soon.

    šŸ‘Øā€šŸ’» The great escape

    Last summer back in New Week #86 I wrote about the worldā€™s largest trial of the four day work week; it was all set to start here in the UK.

    This week the results were published. Those results came from 42 companies, each of which shifted to a four day week ā€” and a ā€˜meaningful reduction in working hoursā€™ ā€” between June and December while keeping staff on the same pay.

    The big message? Overwhelmingly, managers reported a success. A full 92% say theyā€™re continuing with a four day week. And revenue wasnā€™t negatively impacted; it grew 1.2% on average across the trial period.

    Some of the most marked results, though, were around the subjective life satisfaction of the 2,900 employees surveyed. See the graph, below, of perceived time inadequacy:

    Staff saying theyā€™d like ā€˜more time to care for children or grandchildrenā€™ fell by 27 percentage points. More time for own hobbies fell by 33 points.

    Meanwhile, 40% said they were sleeping better, and 54% said it was easier to balance work and home life. These are huge improvements across a six month period.

    āš” NWSH Take: The organisers of this trial, including advocacy group 4 Day Week Global, will put the results in front of British legislators this week. They want to persuade them that Britain should move definitively towards a 32 hour work week. Weā€™re a long way from anything like a consensus on that. But thereā€™s no doubt that the four day movement is gaining momentum; this trial continues the stream of good news from previous trials in Iceland and Japan. The truth, it seems, is that most knowledge workers simply donā€™t need a five day week to maintain their current output. // We donā€™t fully understand the reasons for this, but buried somewhere among them must be the truth that many workers currently arenā€™t using their time that efficiently. Collectively, then, we face a choice. We can find ways to improve efficiency, continue working five days, and really get the most out of them. Or keep output broadly stable, and switch to four days. // Judging by the results of this trial, most would choose the latter. And who can blame them? Whatā€™s the point of getting this rich, and of all these technologies of productivity, if it doesnā€™t all combine to lead us to new and better modes of life? We must then come to ask: when machines do the work ā€” or allow us to do it much faster ā€” whatā€™s left for us? The answer: to do what only we can do: simply being there, being human, for one another.

    šŸ° Memory palace

    The metaverse hype train that powered through 2022 has lost speed recently. But this week, a reminder that the dream is still alive.

    Wist is an app that takes ordinary photos and turns them into immersive 3D projections ā€” allowing you to ā€˜step back inside your memoriesā€™ using an AR or VR device.

    Wist have just opened a private beta for their iOS app, and they say the service will soon come to the Oculus Quest.

    āš” NWSH Take: Immersive memories: itā€™s a compelling pitch. Even if it did remind many in the Twitter thread of an episode of Black Mirror. // The popular story around the so-called metaverse across the last few years ā€” itā€™s nothing, itā€™s everything, itā€™s nothing again, but this time with added cynicism ā€” is an eternal merry-go-round when it comes to emerging technologies. One weā€™ll no doubt see play out around generative AI across the coming year. The deeper truth when it comes to the metaverse? Yes, there was a whole ton of hype, much of it specious. Yes, many of the Big Names of 2020 and 21 will fade away. But the dream that is an immersive, useful, meaningful virtual world is real, and powerful. Virtual worlds will unlock new ways to serve fundamental human needs, new forms of self-expression, and even, as Wist signals, new modes of remembering. For that reason, we havenā€™t heard the last of the metaverse ā€” though I suspect that name will eventually fade away, to be buried alongside phrases such as the information superhighway and surfing the net.

    šŸ—“ļø Also this week

    šŸ‘Øā€šŸ’» Amazon employees arenā€™t happy about the companyā€™s new return to the office instruction. CEO Andy Jassy last week wrote a memo revoking the post-pandemic do whatā€™s best for you dispensation and telling staff to be in the office at least three days per week. He told staff, ā€˜itā€™s easier to learn, model, practice, and strengthen our culture when weā€™re in the office together most of the timeā€™. An Amazon company Slack channel intended to help staff organise against the move has gained 16,000 members.

    šŸ¤– A research team at Chinaā€™s Fudan University released a rival to ChatGPT. The generative AI chatbot, called MOSS, quickly went viral on Chinese social media and crashed under a flood of users. Meanwhile, the CCP is working to restrict access to ChatGPT, which state media has called a tool for the US to ā€˜spread false informationā€™.

    šŸ›° Starlink is testing a new ā€˜global roamingā€™ internet service. The plan will cost users $200 a month. The company has over 3,500 satellites in orbit, with plans to launch thousands more.

    ā›µļø A new US Navy ship can operate autonomously at sea for 30 days. The Expeditionary Fast Transport USNS Apalachicola is 337 feet long, making it the largest autonomous ship in the Navyā€™s fleet; experts say it could be used as a roaming platform for the launch of missiles or drones.

    šŸ¤Æ An AI taught a pretty good human Go player to beat the worldā€™s best AI Go player. Kellin Pelrine, a US citizen and amateur player, used tactics devised by a computer to beat the top-ranked Leela Zero system. Back in 2016 DeepMindā€™s AlphaGo made headlines when it became the first AI to beat then world Go champion Lee Sedol.

    āš–ļø The US Supreme Court is set to examine a federal law that underpins social media as we know it. Section 230 states that internet sites are not responsible for the content posted on them by users; in other words they are platforms and not publishers. Now, the Court is set to hear arguments on two key cases concerning social media content moderation; their ruling could have huge implications for Section 230 and the future of the internet.

    šŸ™ Saudi Arabia wants to build a gigantic hollow-cube skyscraper that will house holographic worlds. The Mukaab will be the centrepiece of a new district of the Kingdomā€™s capital city, Riyadh, and the government is calling it ā€˜the worldā€™s first immersive destinationā€™ offering a range of virtual experiences, including a taste of what it would be like to live on Mars.

    šŸŒ Humans of Earth

    Key metrics to help you keep track of Project Human.

    šŸ™‹ Global population: 8,018,579,711

    šŸŒŠ Earths currently needed: 1.7988619512

    šŸ’‰ Global population vaccinated: 64.0%

    šŸ—“ļø 2023 progress bar: 15% complete

    šŸ“– On this day: On 24 February 1920 Nancy Astor becomes the first woman to speak in the UKā€™s House of Commons, after her election to Parliament three months earlier.

    Swimming in Infinity

    Thanks for reading this week.

    The collision between generative AI and legacy media will do much to shape the hall of mirrors we live inside across the coming years. Itā€™s yet another case of new world, same humans.

    This newsletter will keep watching, and working to make sense of what it all means for our shared future. And thereā€™s one thing you can do to help: share!

    If this weekā€™s instalment resonated with you, why not forward the email to someone whoā€™d also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.

    Iā€™ll be back next week. Until then, be well,

    David.

    P.S Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.newworldsamehumans.xyz
  • Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.

    If youā€™re reading this and havenā€™t yet subscribed, join 24,000+ curious souls on a journey to build a better future šŸš€šŸ”®

    To Begin

    This newsletter is billed a mid-week update. Here, for once, is an instalment that arrives in the middle of the week.

    In this edition? A new study suggests one third of US citizens would use safe and affordable gene editing to create more intelligent children.

    Meanwhile, a prestigious London law firm wants to hire someone who can whisper sweet legalese to ChatGPT.

    Letā€™s get into it.

    šŸ§  Edit button

    This week, a startling glimpse of a coming ideological battle. One that will force us to confront the very meaning of the word human.

    New research reveals that almost one third of US citizens say theyā€™d use gene editing to create more intelligent offspring.

    Published in the journal Science this week, the study asked respondents if theyā€™d use embryo selection and/or gene editing technologies to create children who are smarter and more likely to get into a top-ranked college. The respondents were told to imagine that these techniques are free and safe (neither of which is currently true).

    A full 38% said theyā€™d use embryo selection. And 28% said theyā€™d use gene editing.

    The understated conclusions of the study authors (PGT-P refers specifically to embryo selection):

    ā€˜Our data suggest that it would be unwise to assume that use of PGT-Pā€”even for controversial traitsā€”will be limited to idiosyncratic individuals, or that it has little potential to cause or contribute to society-wide changes and inequities.ā€™

    In other words: gene-edited humans may be just around the corner, so get ready for some seriously weird and terrifying implications.

    Itā€™s just over ten years since the breakthrough ā€” led by scientists Emmanuelle Charpentier and Jennifer Doudna ā€” that brought us CRISPR gene editing. Last month Science ran a retrospective that also looked to what the next decade may bring:

    As the Science retrospective made clear, weā€™re entering an era of CRISPR-fuelled medical interventions. The idea that we may one day engineer babies to be smarter ā€” or physically stronger, or more creative ā€” is no longer far-fetched.

    And the data in this new study suggests many will embrace such a future. We should probably be talking more about what this means.

    āš” NWSH Take: Chinese scientist He Jiankui reemerged into the scientific community this week after a three-year spell in prison courtesy of the CCP. Speaking to the Guardian before an appearance in the UK, he conceded that heā€™d ā€˜acted too quicklyā€™ when in 2018 he created the worldā€™s first babies with edited genomes. His work prompted rapid and near-universal condemnation. But 28% of the US citizens surveyed in this study just said, in so many words: sure, Iā€™d gene edit my baby if it meant she had a better chance of getting into Harvard. // You might counter that 28% is still a clear minority. But a world in which one in four babies ā€” or even a fraction of that ā€” are genetically engineered for greater intelligence is a world profoundly reordered. Weā€™re some way from this kind of targeted genetic intervention right now. But the pace of innovation here, and the Science study, suggest we should start thinking about the implications. // What second and third order effects occur when, for example, an economic elite can access genetic engineering tech that others canā€™t? We talk a lot about the ways in which the internet created winner takes all models that made inequality worse. But what about this? Itā€™s not enough simply to say weā€™ll outlaw these practices. Rich people will find a jurisdiction that caters to them: intelligence tourism. This newsletter will keep watching.

    āš–ļø Prompt justice

    Iā€™ve written a great deal across the last few months about generative AI. This week, a clear signal that the revolution is set to impact the real economy, and the professions, in myriad ways.

    The prestigious British law firm Mishcon de Reya advertised for a GPT Legal Prompt Engineer:

    ā€˜With the release of ChatGPT signalling a new phase of widespread access to LLMs, we are looking to increase our understanding of how generative AI can be used within a law firm, including its application to legal practice tasks and wider law firm business tasks.ā€™

    The selected candidate will work with Mishcon lawyers to ā€˜design and develop high-quality prompts for a range of legal and non-legal use cases, working closely alongside our data science team.ā€™

    Last week I wrote on the way ChatGPT has sparked a war for the future of search. Amid that, it looks as though law firms are about to fight their own battle of the prompts.

    āš” NWSH Take: Itā€™s not hard to imagine how LLMs will prove useful at Mishcon HQ. Case notes on complex trials can run to thousands of pages; now ChatGPT can summarise all that text in seconds. Meanwhile, think about the potential for the development and testing of arguments and counter-arguments. // The broader point here? Thereā€™s much talk of the ways in which ChatGPT and its offspring will automate away jobs and render human creativity obsolete. I suspect the reality will be more complex. And part of that reality? Prompt writing ā€” that is, whispering to generative models in order to get the best outputs from them ā€” is set to become a creative mode all of its own. Far from erasing writers, generative models are causing the emergence of a whole new form of writing; itā€™s about to be an amazing time for those with an aptitude for words. // Sure, itā€™s unlikely that writing prompts for Mishcon will be anyoneā€™s idea of creative heaven. But this is just the start. New art forms will grow out of this new form of writing. How long, for example, until we see entire short stories that function as prompts for an LLM, so that the model can create an interactive world for the reader to explore? NWSH will keep watching ā€” and may even launch an experiment or two of its own.
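
    To make that concrete, here's a minimal sketch of the kind of workflow a legal team might prototype: split a long case document into chunks, ask a GPT-3-family model to summarise each chunk, then condense the partial summaries into a single brief. This is not anything Mishcon has published; the prompt wording, chunk size and model choice (text-davinci-003, via the OpenAI Python library) are illustrative assumptions only.

    ```python
    # Hypothetical sketch: chunked summarisation of long case notes with the
    # OpenAI completions API. Prompts and parameters are illustrative only.
    import openai

    openai.api_key = "YOUR_API_KEY"  # in practice, load this from config or env

    def summarise_case_notes(text: str, chunk_chars: int = 8000) -> str:
        # Naive chunking by character count; a real pipeline would split on
        # section boundaries and count tokens instead.
        chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
        partial_summaries = []
        for chunk in chunks:
            prompt = (
                "You are assisting a litigation team. Summarise the key facts, "
                "dates, and claims in the following case notes as five bullet points:\n\n"
                f"{chunk}\n\nSummary:"
            )
            response = openai.Completion.create(
                model="text-davinci-003",
                prompt=prompt,
                max_tokens=300,
                temperature=0.2,  # low temperature for sober, repeatable output
            )
            partial_summaries.append(response["choices"][0]["text"].strip())
        # Second pass: reduce the partial summaries into one overview.
        final = openai.Completion.create(
            model="text-davinci-003",
            prompt="Combine these partial summaries into a single short brief:\n\n"
            + "\n\n".join(partial_summaries),
            max_tokens=400,
            temperature=0.2,
        )
        return final["choices"][0]["text"].strip()
    ```

    The interesting part is less the code than the prompt design: long documents get mapped into chunks, each chunk gets a carefully worded instruction, and the partial outputs get reduced into a final brief. Designing and testing that wording is, presumably, much of what a GPT Legal Prompt Engineer would spend their days doing.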

    šŸ—“ļø Also this week

    šŸ¤” Users claim that Microsoftā€™s new ChatGPT-fuelled Bing search engine is becoming spiteful and rude. Feedback from the first wave of testers includes responses in which the chatbot claimed to be sentient, and one in which it asked its user, ā€˜Why do you act like a liar, a cheater, a manipulator, a bully, a sadist, a sociopath, a psychopath, a monster, a demon, a devil?ā€™ Iā€™m going on record here: Iā€™m sceptical that some of these responses are real. I think Microsoft have some pranksters on their hands. Meanwhile, Microsoft permanently killed Internet Explorer this week, after 27 years of, letā€™s be honest, variable service.

    šŸ Anti-ageing scientists used young blood plasma to extend the age of the worldā€™s oldest lab rat. Scientists at US startup Yuvan Research say blood therapies of this kind may be able to ā€˜rewind the clockā€™ on human lifespan ā€” but more evidence is needed.

    šŸ›’ Amazonā€™s CEO says the retail giant plans to ā€˜go bigā€™ on physical stores. Speaking to the Financial Times, Andy Jassy said: ā€˜weā€™re hopeful that in 2023, we have a format that we want to go big on, on the physical sideā€™. The company recently announced that it will lay off more than 18,000 workers.

    šŸ’ø News aggregation and comment platform Reddit wants to IPO later this year. Thatā€™s according to technology publication The Information.

    šŸ™Š Audiobook narrators say they fear Apple is using their work to train synthetic voices. Some narrators say they have only just become aware of a clause in their contract that allows the tech giant to ā€˜use audiobooks files for machine learning training and modelsā€™. Back in New Week #110 I wrote about UK-based startup ElevenLabs and its eerily good text-to-voice model.

    šŸŖ NASAā€™s Curiosity rover has found the ā€˜clearest evidence yet of an ancient lake on Marsā€™. At the foothills of a Martian mountain the rover discovered rocks etched with what appear to be the marks left by flowing water. If a lake did exist on Mars, it raises the probability that the planet was once home to microbial life forms.

    šŸŒ Humans of Earth

    Key metrics to help you keep track of Project Human.

    šŸ™‹ Global population: 8,016,948,535

    šŸŒŠ Earths currently needed: 1.7983081097

    šŸ’‰ Global population vaccinated: 63.8%

    šŸ—“ļø 2023 progress bar: 12% complete

    šŸ“– On this day: On 15 February 1946 the worldā€™s first electronic general-purpose computer, ENIAC, is launched at the University of Pennsylvania, Philadelphia.

    Next Human

    Thanks for reading this week.

    The human impulse towards self-enhancement ā€” towards the transcension of physical, intellectual, and emotional limits ā€” is eternal. Now, that impulse is colliding with powerful new technologies of genetic manipulation.

    Via those technologies, are we about to see the emergence of fundamentally new kinds of human beings? What then for the thought that frames this newsletter: new world, same humans?

    Iā€™ll keep watching. And thereā€™s one thing you can do to help: share!

    Now youā€™ve reached the end of this weekā€™s instalment, why not forward the email to someone whoā€™d also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.

    Iā€™ll be back next week. Until then, be well,

    David.

    P.S Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.newworldsamehumans.xyz
  • Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.

    If youā€™re reading this and havenā€™t yet subscribed, join 24,000+ curious souls on a journey to build a better future šŸš€šŸ”®

    To Begin

    I was away for most of this week, and that means a truncated instalment of the newsletter.

    But that gives me a chance to dive a little deeper than usual into a single story. Which works out well, because this week saw the first shots fired in what is set to become an epic battle for the future.

    Letā€™s get into it.

    šŸ” In search of answers

    This week, three glimpses of the revolution taking place via large language models ā€” in particular, of its implications for online search.

    Microsoft announced a new version of its search engine, Bing, which it billed an ā€˜AI copilot for the webā€™.

    The all-new Bing is powered by OpenAIā€™s GPT-3.5; its most notable new feature is a ChatGPT-style conversational capability, which responds with narrative answers to enquiries such as help me find a pet or I need to write a pop music trivia quiz.

    Microsoft have long funded OpenAI, and announced a new $10 billion investment in January.

    Currently only a limited preview version of the new Bing is available; still, via some pre-set enquiries it offers a taste of the way the platform integrates ChatGPT with traditional search:

    Meanwhile, Google announced the coming release of their own AI conversational agent.

    The tool, Bard, is built on Googleā€™s large language model LaMDA, and in a blog post Google announced that it was being released to a small cadre of expert testers this week in anticipation of a broader release soon.

    But the announcement quickly hit a snag: observers pointed out that in a promotional video to support the announcement Bard incorrectly reported that the James Webb telescope took the first pictures of a planet outside our solar system. It didnā€™t.

    Whatā€™s more, at the launch event on Wednesday morning Google said nothing about how the chat tool will be integrated into its broader search service.

    The market response? Shares in Googleā€™s parent company, Alphabet, fell by 9% ā€” wiping $100 billion off the market value.

    Finally, cunning users found a way to subvert the content moderation policies imposed by OpenAI on ChatGPT; those policies are intended to stop the chatbot generating responses some consider harmful or offensive.

    Check out this Reddit thread for the full story of this method and its history. But essentially it entails writing a prompt that asks ChatGPT to emulate another AI called DAN ā€” it stands for Do Anything Now ā€” which is not subject to any content moderation.

    Thatā€™s it: you just ask. That done, ChatGPT-DAN will go wild at your behest, spewing out hateful statements and generating conspiracy theories to order. Here is a relatively mild taste:

    Itā€™s everything OpenAI donā€™t want associated with their new superstar creation. They are no doubt working hard, as I write, to patch up this glitch.

    āš” NWSH Take:

    Both Microsoft and Google are keen to stress how responsible theyā€™re being when it comes to generative AI. Weā€™re putting safeguards around this technology, they keep telling us. Weā€™re releasing it gradually, so we can monitor the impacts.

    But donā€™t let all the corporate ethics-speak fool you: via the revolutionary power of LLMs, these two tech giants are now at war for the future of search. Each is racing to outdo the other, and theyā€™re not going to let up.

    At stake? One of the biggest prizes in existence: win search, and you get to be the lens via which humanity views its collective knowledge and shared cultural history. Achieve that, and you can shape and profit from countless online behaviours and innovations built on top of your platform. Google built a $1 trillion business on these truths.

    Now, Microsoft is coming for that business. Even the limited taste of new Bing currently available makes clear the way that high-quality conversational AI can be an era-defining phase shift for search. Itā€™s a whole new way of communing with knowledge.

    *

    Googleā€™s lead in search is currently overwhelming: it has around 84% of the market, while Microsoftā€™s Bing is in distant second place with 9%.

    But I wonder how many inside Google right now are recalling the story of another tech giant: Nokia. Back in 1998 the Finnish company commanded 40% of the global mobile phone market. Theyā€™d helped pioneer the first-wave mobile revolution, and their domination seemed unbreakable. Then came 2007, and the iPhone.

    Thereā€™s much about that story that is unrepeatable. This isnā€™t the late 90s; Google isnā€™t Nokia. But itā€™s a reminder that the seemingly unbeatable can be beaten. And, more concerningly for Google, for everyday users the arrival of ChatGPT carries with it echoes of the arrival of the iPhone 16 years ago: oh s**t, Iā€™ve never used anything like this before. This feels like itā€™s from the future.

    In the announcement of Bard, itā€™s hard not to hear whispers of an organisation somewhat spooked by whatā€™s happening.

    And now itā€™s clear that the markets, too, believe that everything is up for grabs. A 9% share price dive all because Bard spat out a factoid; it seems a mad over-reaction. But if it helps drive a narrative that Microsoft are winning the generative search war, it may become a self-fulfilling prophecy.

    But that war is still in its early days. Sure, Bard made an error on the James Webb telescope ā€” though now an argument rages over whether it was, in fact, wrong ā€” but ChatGPT is prone to produce factual inaccuracies and even errors on basic arithmetic.

    These problems are being solved; via iterative releases, ChatGPT is already more factually reliable than it was a month ago.

    Thereā€™s a long, long way to go. Search is about to be ripped up and put back together again, and itā€™s going to be fascinating.

    *

    But there are issues in play, here, that go even deeper.

    Weā€™re still at the start of any attempt to understand what these LLMs really are, how we should relate to them, and how theyā€™ll change our lives.

    One angle on all that? Iā€™ve argued before that LLMs such as GPT 3.5 and LaMDA are best understood as a new instantiation of the human hivemind. These AIs can take in everything weā€™ve got ā€” an appreciable amount of all the text on the internet, say ā€” and create novel syntheses and remixes of their own. They are less a straightforward digital tool, and more a window ā€” onto our shared intellectual and cultural history, on to the collective consciousness.

    Seen this way, we may come to view generative AI as a shift comparable to others that profoundly changed our relationship with knowledge. The arrival of the printing press. The invention of the internet.

    These are bold claims. They deserve all the scepticism they will attract. All we can try to do is make sense of whatā€™s happening in real-time.

    Iā€™ll be publishing a short note soon that seeks to dive further into all this. In particular, into why using ChatGPT feels such a particular and new kind of experience.

    A sneak peek: I think itā€™s to do with the way human thought itself is, by its nature, a dialogue. That is, with the way thought is a form of talking to ourselves.

    šŸ—“ļø Also this week

    šŸŒ Climate activists are suing Shellā€™s board of directors over global heating. Environmental law charity ClimateEarth say Shellā€™s 11 directors have breached their legal responsibility under the UK Companies Act because Shellā€™s climate strategy does not align with the Paris Agreement.

    šŸ° Disney says it will lay off 7,000 employees as it struggles with a slowdown in subscriptions to its streaming service. Around 46 million people subscribe to Disney+. But the companyā€™s direct-to-consumer division, which includes the streaming service, reported an operating loss of $1.1 billion across the last quarter of 2022.

    šŸ™Š Voice actors say theyā€™re facing new contracts that ask them to sign the rights of their voice away to AI. Last week I wrote about ElevenLabs, the UK startup behind a next-level generative voice tool. Meanwhile, music producer David Guetta used an AI voice clone of Eminem in a new song.

    šŸš€ SpaceX tested the most powerful rocket system ever built. The ā€˜static fireā€™ test took place at SpaceXā€™s base in Texas; it saw 31 of Starshipā€™s 33 engines fired. The rocket system is roughly twice as powerful as the SLS rocket behind NASAā€™s Artemis missions, and Elon Musk says it could help carry humans to Mars.

    šŸ’ø The Bank of England says the UK may one day need a ā€˜digital poundā€™. A new consultation paper says the new ā€˜retail central bank digital currencyā€™ would be issued by the Bank and could be used by households and businesses as an everyday form of payment.

    šŸ‘¶ The CCP wants Chinese local leaders to boost the birth rate. A senior health official called on leaders to ā€˜make bold innovationsā€™ to encourage more births, including moves to lower the cost of childcare and education. Last year saw the lowest birth rate on Chinese records, at 6.77 births per 1,000 people.

    šŸŒ– Scientists say we could tackle global warming by shooting moondust into space. The new study, from the Harvard-Smithsonian Center for Astrophysics, explores the idea of using a powerful cannon to fire lunar dust from the Moonā€™s surface into space. If positioned between the Earth and the Sun, say the researchers, the dust could act as a heat shield that helps to lower global temperatures. I explored geoengineering of this kind in more detail back in NWSH #60.

    Chat Lines

    Thanks for reading this week.

    We are the creature that talks. Now, weā€™ve built machines that can talk back. Itā€™s yet another chapter in the long story that is new world, same humans.

    Itā€™s clear that weā€™re setting out on a road that will take us to new and alien places. This newsletter will try to make sense of the journey.

    If this weekā€™s instalment struck a chord, please consider forwarding the email to someone whoā€™d also enjoy it. Or share this across one of your social networks, with a note on why you found it valuable. Remember, the larger and more diverse the New World Same Humans community becomes, the better for all of us!

    Iā€™ll be back next week. Until then, be well,

    David.

    P.S Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.newworldsamehumans.xyz
  • Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.

    If youā€™re reading this and havenā€™t yet subscribed, join 24,000+ curious souls on a journey to build a better future šŸš€šŸ”®

    To Begin

    Greetings from London! Weā€™ve just seen seven days containing plenty of fuel for the NWSH fire.

    This week, a UK-based startup release an amazingly good AI text-to-voice tool. Until all hell breaks loose, and they promptly unrelease it.

    Meanwhile, new research suggests 1.5C of global heating is coming sooner than we thought. And DHL turn to Boston Dynamics to solve their labour shortage woes.

    Letā€™s get started.

    šŸŽ¤ Voice control

    This week, further glimpses into the halls of mirrors taking shape around us via generative AI.

    UK-based voice technology startup ElevenLabs launched a new text-to-speech model that generates eerily pitch-perfect, human-sounding voices. Hereā€™s a snippet:

    I listen to a lot of audio books. To my ear, the voice reading Gatsby above sounds indistinguishable from those of the handful of actors ā€” male, American, blessed with a soothing voice ā€” who narrate most of them.

    Whatā€™s more, the tool allows anyone to create a highly convincing voice clone in seconds, simply by uploading a few short clips of the voice they want to recreate.

    And thatā€™s what caused all the trouble this week. Within days, people had used the tool for all kinds of mischief, including using a voice clone of actress Emma Watson to read passages from Mein Kampf, and sending a cloned Ben Shapiro on a racist rant about Alexandria Ocasio-Cortez. Much of this content was shared on the infamous trollā€™s paradise that is 4Chan.

    Three days after launch ElevenLabs withdrew free access. Theyā€™re now restricting access to the ā€˜build your own cloneā€™ feature to paid users, and say theyā€™re working on a tool that will allow for the near-instant detection of AI-generated voices.

    The announcement echoed one made this week by The Big Player in generative AI:

    OpenAIā€™s new tool will allow users to identify text written by a generative model, including by GPT-3.

    This week, OpenAI announced that ChatGPT has hit 100 million users just two months after launch. The vast popularity of the tool has led to speculation that the internet is about to be hit by a tsunami of AI-generated junk content and disinformation.

    āš” NWSH Take: The ElevenLabs story is a signal of the potent difference between really good and perfect when it comes to generated/deepfake content. Just a few months ago publicly available text-to-voice tools were generating voices that sounded good, but a little robotic. ElevenLabs elevated fidelity to perfect; cue the spectre of a million convincing ā€˜celebrity says hateful thingsā€™ fakes. // No wonder, then, that AI detection tools are about to become big business. Right now, these tools are in their infancy. Pretty soon, internet browsers will come with AI detection as standard. // The broader message here? New forms of generated content ā€” including voice clones ā€” are about to transform media and entertainment. Back in New Week #100 I wrote on how an AI will voice Darth Vader in Disneyā€™s Obi-Wan Kenobi series; this week brought news that AI startup Metaphysic ā€” best-known for their viral Tom Cruise deepfakes ā€” will deploy its technology to make Tom Hanks appear younger in his next film. How long before a Hollywood film uses AI to reincarnate a much-loved star who is no longer with us? // But it wonā€™t only be Hollywood and media giants that leverage generated media; new tools will mean new creative possibilities for all of us. One glimpse? Check out this person who automated the creation of a personalised podcast; he uses ChatGPT to collect and summarise stories on topics of interest, and ElevenLabs to read out the summaries using a clone of his own voice.
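
    For a sense of how little code that personalised-podcast pipeline needs, here's a rough sketch: summarise a handful of articles with an OpenAI model, then send the script to ElevenLabs for synthesis in a cloned voice. This is a speculative reconstruction, not the creator's actual setup; the ElevenLabs endpoint and payload reflect its public v1 API as I understand it, and the voice ID, prompt and article list are placeholders.

    ```python
    # Speculative sketch of a DIY daily-briefing podcast: OpenAI writes the
    # script, ElevenLabs reads it in a cloned voice. The ElevenLabs route and
    # payload are assumptions; check the current API docs before relying on them.
    import openai
    import requests

    openai.api_key = "YOUR_OPENAI_KEY"
    ELEVENLABS_KEY = "YOUR_ELEVENLABS_KEY"
    VOICE_ID = "your-cloned-voice-id"  # placeholder ID for a cloned voice

    articles = ["<article text 1>", "<article text 2>"]  # placeholder inputs

    # 1. Turn the articles into a short spoken-word script.
    script = openai.Completion.create(
        model="text-davinci-003",
        prompt="Summarise these articles as a friendly two-minute podcast script:\n\n"
        + "\n\n---\n\n".join(articles),
        max_tokens=500,
        temperature=0.7,
    )["choices"][0]["text"].strip()

    # 2. Synthesise the script in the cloned voice (assumed v1 text-to-speech route).
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
        headers={"xi-api-key": ELEVENLABS_KEY},
        json={"text": script},
        timeout=120,
    )
    resp.raise_for_status()

    with open("daily_briefing.mp3", "wb") as f:
        f.write(resp.content)  # the endpoint returns raw audio bytes
    ```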

    šŸŒ‡ Only adapt

    Research published this week argues that weā€™re going to exceed the 1.5C global warming target far sooner than most people believe.

    Produced by scientists at Stanford University, the study used AI to analyse recent temperature changes around the world. It concluded that weā€™ll exceed 1.5C some time in the early 2030s, no matter what happens to greenhouse gas emissions in the intervening period.

    Perhaps more alarming, though, is the paperā€™s prediction when it comes to 2C of warming.

    The model found that if reaching net zero emissions takes another 50 years, then it is likely that 2C will be exceeded. This runs counter to the mainstream view, recently expressed by the Intergovernmental Panel on Climate Change, that weā€™ll stay below 2C if we can reach net zero by 2080.

    Lead researcher Noah Diffenbaugh said: ā€˜net-zero pledges are often framed around achieving the Paris Agreement 1.5 C goal. Our results suggest that those ambitious pledges might be needed to avoid 2 C.ā€™

    āš” NWSH Take: Itā€™s been said before in this newsletter: the 1.5C target is toast. Weā€™re already at 1.1C, and the pledges that were meant to keep us below 1.5C are not being met. Now comes news that those pledges probably wonā€™t keep us below 1.5C anyway. // The answer, here, insofar as there is one? Itā€™s about adaptation. This week also saw a report from the UKā€™s Climate Change Committee ā€” which advises government on warming ā€” that the UK is ā€˜chronically underspendingā€™ when it comes to adaptation; investment of Ā£10 billion a year is needed, said the report, to prepare for the uptick in storms, floods, and heatwaves that is coming. Also see mounting evidence for the effectiveness of direct cash transfers to poorer countries to help them adapt quickly to an imminent storm or flood. // In short, we need to continue our attempts to mitigate future climate change, while also doing more to adapt to the change thatā€™s already unavoidable. That presents multiple challenges, but one is a challenge of collective psychology: can we accept that things are already quite bad, without giving up on our attempts to stop them getting even worse?

    šŸ¤– Go bot

    Robots are coming to a workplace near you; this week saw glimpses of what is ahead.

    Logistics giant DHL announced that theyā€™re now using the Boston Dynamics robot known as Stretch to unload trucks at one of their warehouse sites.

    The announcement is no surprise: DHL contributed to the conception and testing of Stretch, and in 2022 they became the first commercial customer for the robot.

    Because it involves lifting variable weights and navigating complex environments, the unloading of boxes from trucks is still typically undertaken by human workers. Stretch can unload around 350 boxes an hour, or roughly one every ten seconds ā€” thatā€™s far faster than a human.

    DHL say theyā€™ve been dealing with a pandemic-induced labour shortage in recent years, combined with an ongoing surge in the sending of small packages caused by online shopping. The company plans to install Stretch robots at further sites around the US soon.

    But DHLā€™s global digital transformation officer for Supply Chain, Sally Miller, says DHL warehouse workers have nothing to fear. The advent of robotics, she says, will simply make their job easier and more fun: people who used to unload trucks ā€˜can do something else that is less labour intensive and more enjoyable and value addedā€™.

    āš” NWSH Take: Who knows whether DHLā€™s Sally Miller believes what sheā€™s saying? And sure, the story of worker displacement here is more complex than simply robots in, humans out. After all, people will be needed to tend to all those machines. But letā€™s be real. The advent of Stretch and similar robots isnā€™t going to bring about a renaissance of creativity and ā€˜value addā€™ for warehouse workers; itā€™s going to see people shunted out of jobs. A lot of people. // DHL Supply Chain employs 165,000 people, many of them in warehouses. But thatā€™s just the start. Back in New Week #88 I wrote on the speed at which Amazon is deploying robots; this week star technology investor Cathie Wood, CEO of ARK Invest, predicted the retail giant will have more robots than humans in its warehouses by 2030. Amazon employs around 1.6 million people worldwide, most in its warehouse and distribution network; Wood reckons the company is adding 1,000 robots a day. // The upshot? The dynamics of the labour market are about to be upturned by AI and robotics. Big corporations donā€™t want to admit it, and politicians donā€™t want to talk about the implications. But a reordering is ahead, and weā€™ll need new social and economic settlements to deal with it.

    šŸ—“ļø Also this week

    šŸ“± A member of the US Senate Intelligence Committee called on Apple and Google to ban TikTok from their app stores. Colorado Democratic Senator Michael Bennet said Chinese oversight of the service makes it ā€˜an unacceptable threat to the national security of the United Statesā€™. Amid mounting calls for action, TikTok CEO Shou Zi Chew will testify before Congress this month.

    šŸ™Š Energy firm Shell has been dramatically overstating their spending on renewable energy. Activist group Global Witness says a division of the company called Renewables and Energy Solutions spends most of the money diverted to it on gas. Shell this week announced record profits of Ā£33.1 billion for 2022.

    šŸŽØ Netflix used generative AI to create backdrops for a new animated short. Dog and Boy is a three-minute animated film about a boy and his robot; Netflix cited labour shortages to explain its decision to use AI-generated artwork.

    šŸ‘“ A leading anti-ageing scientist says he believes the first person to live to 150 has already been born. David Sinclair is the scientist behind the information theory of ageing; I wrote about experimental breakthroughs in his work in New Week #109 last week.

    šŸ“ŗ A Twitch user created an AI-generated version of the 1990s sitcom Seinfeld intended to stream continuously and forever. The show ā€” called Nothing Forever ā€” streams new content 24/7, with a script generated by GPT-3. And while itā€™s not actually funny, it is weirdly compelling viewing.

    šŸ‘Øā€šŸ’» Chinese tech giant Baidu say theyā€™ll soon launch a ChatGPT-style chatbot of their own. The company say theyā€™ll incorporate the technology into their search engine.

    šŸšØ A Dutch hacker acquired and tried to sell the personal data of nearly every Austrian citizen. Austrian police say the hacker obtained the full name, address, and date of birth of almost all of the countryā€™s 9.1 million citizens, before offering the database for sale in an online forum.

    šŸ‘¾ An international team of astronomers using AI to search for aliens say they have promising leads. The team are using AI to comb through a vast number of radio signals collected by the Green Bank Telescope in West Virginia. They say theyā€™ve so far identified eight signals that suggest an intelligent origin, and point to AI analysis as a new and highly effective tool in the search for life beyond Earth.

    šŸŒ Humans of Earth

    Key metrics to help you keep track of Project Human.

    šŸ™‹ Global population: 8,014,736,045

    šŸŒŠ Earths currently needed: 1.7975568773

    šŸ’‰ Global population vaccinated: 63.8%

    šŸ—“ļø 2023 progress bar: 9% complete

    šŸ“– On this day: On 3 February 1913 the Sixteenth Amendment to the United States Constitution is ratified; it allows the Federal government to impose and collect an income tax.

    Hear Me Now

    Thanks for reading this week.

    In 1985 the media theorist Neil Postman published Amusing Ourselves to Death. Entertainment, he said, was becoming the lens via which citizens of the west make sense of the world around them, and their own lives.

    Now, the Republic of Entertainment that Postman foresaw is set to be transformed by AI generated content, which will propel us deeper into the realms of the representation-as-real, or the hyper-real. If only Postman was still around to tell us what to make of that.

    This newsletter will keep up its own attempts to make sense of it all. And thereā€™s one thing you can do to help: share!

    Now youā€™ve reached the end of this weekā€™s instalment, why not forward the email to someone whoā€™d also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.

    Iā€™ll be back next week. Until then, be well,

    David.

    P.S Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.newworldsamehumans.xyz
  • Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.

    If youā€™re reading this and havenā€™t yet subscribed, join 24,000+ curious souls on a journey to build a better future šŸš€šŸ”®

    To Begin

    After an extended break, itā€™s the first New Week update of 2023!

    This week, media giants are waking up to the IP implications of generative AI. The TL;DR? Theyā€™re not happy, and a mighty legal battle is brewing.

    Meanwhile, a US startup is planning to become the first private organisation to mine asteroids and bring the minerals back to Earth. And a Harvard longevity doctor says he has uncovered one of the key mechanisms that governs human ageing.

    Letā€™s get into it.

    šŸ¤– Politics chat

    Generative AI is an earthquake with implications weā€™ll be forced to contend with in the years ahead. This week saw powerful signals of what is coming.

    Getty Images announced that they will sue Stability AI, the company behind text-to-image platform Stable Diffusion. The media giant, which owns more than 135 million copyrighted images, says Stability AI unlawfully scraped their IP in order to help train its model.

    The company isnā€™t seeking financial damages, says CEO Craig Peters. Instead, Peters talks about the establishment of new business models; by way of comparison he cites the wave of illegal music streaming sites that enjoyed huge popularity in the early 2000s, but that eventually gave way to legal streaming services:

    ā€˜I think there are ways of building generative models that respect intellectual property. I equate this to Napster and Spotify. Spotify negotiated with intellectual property rights holders ā€” labels and artists ā€” to create a serviceā€¦And thatā€™s what weā€™re looking for, rather than a singular entity benefiting off the backs of others.ā€™

    Getty is bringing its action in the UK. A spokesperson for Stability AI said the company will defend itself, and that the suit is based on ā€˜a misunderstanding of how generative AI technology works and the law surrounding copyrightā€™.

    This move comes in the wake of news that three visual artists will sue both Stability AI and Midjourney. Their class action lawsuit claims the platforms ā€˜violated the rights of millions of artistsā€™ by using their work as training data.

    Puerto Rican artist Karla Ortiz is one among the three bringing the case:

    Meanwhile, artists are developing tools that enable them to check whether their work was used to train a popular text-to-image model.

    āš” NWSH Take: Generative AI is about to smash into a complex mesh of social systems that are woven through the economy, the world of work, creative practices, and more. And as if to underline that truth, OpenAI CEO Sam Altman was in Washington DC this week to talk to policymakers. // What they made of his message ā€” which reportedly included explanations that OpenAI is working towards AGI ā€” remains unclear. After all, policymakers across the Global North are still struggling to come to terms with web 2.0, almost 20 years after its emergence. Analysts will watch the Getty lawsuit closely for hints on how the IP question is set to play out. But thatā€™s just the start. What about generative AIā€™s impact on disinformation? Or employee displacement? Or our education systems: news broke this week that ChatGPT passed law exams in four courses at the University of Minnesota. How do we legislate for that? // The fundamental problem: AI and other technologies are evolving at a speed that our societies canā€™t adapt around. Weā€™ve been talking about an online wild west for years, but the current dispensation will come to seem quaint given what is coming. One potential answer? In time, we may have no choice but to turn to AI to help us devise new laws and norms that enable us to cope with this technological disruption. The rise of AI, then, may necessitate governance by AI. Thatā€™s a mind-bending idea that NWSH will come back to soon.

    Update: just as Iā€™m hitting send comes news that Google have released an insanely good text-to-music model. See ā€˜Also this weekā€™, below, for further details. But clearly the IP questions currently swirling around generative AI and the visual arts will soon be coming to music, too.

    šŸŒŒ Space drills

    A US startup, AstroForge, this week announced that it will launch two space mining missions in 2023:

    AstroForge say they want to become the worldā€™s first commercial company to mine an asteroid and bring the minerals back to Earth.

    The first mission of 2023, planned for April, will see AstroForgeā€™s refining technology tested aboard a SpaceX Falcon 9 rocket.

    And the second, later in the year, will see the startup piggyback on another Falcon 9 ā€” this one headed for the Moon. An AstroForge probe will travel to lunar orbit along with the spacecraft, before heading out into deep space on its own to take hi-res images of the asteroid that AstroForge eventually wants to mine.

    āš” NWSH Take: Space mining has been a mainstay of science-fiction for decades, and was the subject of a wave of hype a few years back. Now, via the maturation of the private space startup ecosystem, itā€™s coming. // And itā€™s going to be wild. Want a glimpse of the prizes in play? NASA say that this year theyā€™ll launch a mission to the asteroid 16 Psyche; the 140 mile wide object is believed to contain a core of iron, nickel and gold worth $10,000 quadrillion. Thatā€™s around 70,000 times the size of the global economy. // Of course, weā€™d need to get all that nickel and gold back to Earth to sell it. And thatā€™s where startups such as AstroForge come in. On the other hand, though, do we have to get it back to Earth? I canā€™t help wondering: if people come to believe that these minerals will one day be recoverable, will that fuel the financialisation of these asteroids? Will people start selling shares in them, or taking huge loans against them? What will that do to the global financial system? NWSH will keep watching.

    šŸ§’ Department of youth

    Developments this week in our eternal quest for the secrets of immortality.

    Scientists at the University of Bristol say theyā€™ve used gene therapy to ā€˜rewindā€™ the biological age of the heart in elderly mice.

    The research, published in the journal Cardiovascular Research, studied the impacts of a gene mutation often found in centenarians, and believed to help protect against heart disease. Researchers in the UK and Italy found that when the gene was administered to elderly mice, it fuelled processes of repair that resulted in the heart health of a younger mouse ā€” equivalent to a decade younger in human terms.

    The paper comes after news last week of a major ageing breakthrough. A 13-year study conducted by Harvard genetics professor David Sinclair seems to confirm Sinclairā€™s information theory of ageing.

    Currently, mainstream scientific opinion is that the accumulation of mutations in DNA is the primary driver of ageing. Sinclair, though, has long believed that the real culprits are errors that appear over time in the information carried in the epigenome. This information is used to instruct cells on which genes to activate and which to keep silent; but over time, says Sinclair, the instructions get jumbled, and the result is the cell dysfunction we call ageing.

    Sinclairā€™s new study suggests he is (at least in part) right. And thatā€™s huge, because it raises the possibility that we can repair the epigenetic instructions ā€” Sinclair likens this to ā€˜rebooting the epigenomeā€™ ā€” and so literally unspool the ageing process. When Sinclair and his team gave gene therapy to mice that repaired the information in their epigenome, the result was the production of far more youthful cells. Sinclair says:

    ā€˜Now, when I see an older person, I donā€™t look at them as old, I just look at them as someone whose system needs to be rebooted. Itā€™s no longer a question of if rejuvenation is possible, but a question of when.ā€™

    āš” NWSH Take: This week it was impossible to avoid headlines about Bryan Johnson, a 45-year-old Silicon Valley founder and Very Rich Person who spends $2 million a year on a regime ā€” including constant blood tests and thousands of whole-body MRIs ā€” intended to rewind his biological age to 18. Sure, thatā€™s extreme. But Johnson is questing at the outer edges of a pursuit ā€” extended youthfulness ā€” that interests almost all of us. // In 2023, weā€™re going to hear a lot more about it. Sinclairā€™s research offers a whole new angle on anti-ageing therapies. Meanwhile, work that targets ageing is becoming increasingly mainstream and well-funded. Iā€™ve written before on Jeff Bezos-funded Altos Labs, which now has a $3 billion war chest. Pharma giant Pfizer this month announced a drug discovery partnership with longevity startup Gero. And scientists at New Yorkā€™s Albert Einstein College of Medicine are planning a huge study on the hypothesis that the common (and cheap) diabetes drug metformin can safely extend human lifespan by years. // Exciting advances; huge unanswered questions. Not least: what will extended lifespan do to already strained social and welfare systems in the Global North?

    šŸ—“ļø Also this week

    šŸš€ NASA says it will partner with the Defense Advanced Research Projects Agency (DARPA) to develop a nuclear thermal rocket engine. The Agency says the engine could one day enable humans to journey deep into space. They are aiming to have a prototype ready no later than 2027.

    šŸ¤– An Amazon engineer asked ChatGPT a series of standard interview questions for a coding job at the company, and it got them all right. The machine learning engineer revealed details of the experiment in the company Slack. Meanwhile, Amazon has warned employees not to share commercially sensitive information with the chatbot.

    šŸŒ³ A new study says human activity may have degraded far more of the Amazon rainforest than previously believed. Scientists at Lancaster University in the UK say logging, land conversion and more has weakened more than 2.5 million square kilometres of the rainforest; thatā€™s around one third of its area, and double the area previously thought to have been affected.

    šŸŸ US scientists used CRISPR to put an alligator gene inside catfish. The gene makes the catfish more resistant to infection, which is a major problem during catfish farming. US farms produce 307 million tonnes of catfish each year.

    šŸ›° SpaceX has agreed to work with the US National Science Foundation to mitigate the impacts of its satellites on our view of the night sky. Astronomers have long complained that SpaceX satellites ā€” the company plans to launch tens of thousands ā€” will impair their work. Regular readers already know that this subject is a longterm NWSH obsession.

    šŸ˜± The World Economic Forum says a ā€˜catastrophic cyber eventā€™ is likely some time within the next two years. Speaking at Davos, WEF managing director Jeremy Jurgens said that 93% of cyber leaders surveyed by the organisation believe a cyber catastrophe is coming soon; thatā€™s a far higher proportion, said Jurgens, than seen in previous years.

    šŸ¤Æ And just as Iā€™m hitting sendā€¦Google have announced a new text-to-music model that blows away previous attempts at generative music. The model, called MusicLM, can generate long and complex compositions based on only a text description. Go here and listen to, among others: Epic soundtrack using orchestral instruments. The piece builds tension, creates a sense of urgency. An a cappella chorus sing in unison, it creates a sense of power and strength.

    šŸŒ Humans of Earth

    Key metrics to help you keep track of Project Human.

    šŸ™‹ Global population: 8,013,469,158

    šŸŒŠ Earths currently needed: 1.7971267236

    šŸ’‰ Global population vaccinated: 63.8%

    šŸ—“ļø 2023 progress bar: 7% complete

    šŸ“– On this day: On 27 January 1820 a Russian expedition led by naval officer Fabian Gottlieb von Bellingshausen discovers the Antarctic continent.

    Itā€™s Magic

    Thanks for reading this week.

    The generative AI revolution is unfolding at what feels like breakneck speed. Googleā€™s new music model is, at first listen, amazing. Iā€™ll write more on it next week, or maybe sooner in the Slack group.

    Weā€™re all going to have to figure out the consequences of these new technologies and how we propose to live with them. Itā€™s another case of new world, same humans.

    This newsletter will keep watching. And thereā€™s one thing you can do to help: share!

    Now youā€™ve reached the end of this weekā€™s instalment, why not forward the email to someone whoā€™d also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.

    Iā€™ll be back next week. Until then, be well,

    David.

    P.S Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.newworldsamehumans.xyz
  • Welcome to New World Same Humans, a newsletter on trends, technology, and society by David Mattin.

    If youā€™re reading this and havenā€™t yet subscribed, join 24,000+ curious souls on a journey to build a better future šŸš€šŸ”®

    To Begin

    Here we are again, at the start of it all. Well, almost; I canā€™t believe weā€™re already 22 days into 2023.

    I took something of an extended break from the newsletter over the holidays. But now NWSH is back, and a whole new year lies ahead of us.

    In London weā€™re amid another cold snap; close to zero degrees during the day. And across the northern hemisphere winter, nature is in frozen stasis. Itā€™s a familiar yet always ā€” to me, at least ā€” strange and ghostly-seeming pause. Listen carefully and you can hear its whispered message: the journey is beginning again.

    Itā€™s a time to consider what has passed, and look to what is ahead. How can we make this year even better than the last?

    This is the question I want to examine ā€” as it pertains to New World Same Humans ā€” in this note. That means first, and briefly, a review of 2022. And then, more important: whatā€™s coming this year?

    This first instalment of the year, then, is more about our community than it is about the world out there, which is our usual subject. Thereā€™s some thinking aloud going on here; an attempt on my part to make sense of where the newsletter has been and where itā€™s going. But given the precious attention you spend on NWSH ā€” and Iā€™m so grateful you do ā€” I hope itā€™s valuable for you to ride along as I figure all this out. And, of course, all feedback and suggestions are welcome.

    Whatā€™s more, without drinking any self-help kool aid, it seems to me that thereā€™s a lesson in the journey I went on with the newsletter last year. One about defeating perfectionism, being adaptable, and playing infinite games.

    But we can get to that at the end. First, letā€™s dive into a review of 2022.

    Whatā€™s past is prologue

    Before we can think coherently about where to take NWSH in 2023, we need to understand what just happened.

    This is where things get a little awkward.

    Back in January 2022, as many of you will remember, I set sail towards a renewed vision for the newsletter. I wanted to double-down on what makes NWSH unique; to accelerate the newsletterā€™s journey towards itself.

    Conceptually, that meant a project animated by three questions:

    * What is the nature of technological modernity?

    * What is the nature of a human being, and the human collective?

    * What new forms of life are possible, and desirable?

    And when it came to content, it meant the launch of a new schedule. While the flagship mid-week update would continue, the weekly note on Sunday was to be killed and replaced with monthly longform essays.

    Weā€™ll come back to the conceptual part. As for the new content schedule ā€” as many of you noticed, it didnā€™t work quite as planned. What happened?

    Essentially, my pandemic and post-pandemic realities collided. This newsletter was born at the start of Covid; the first year was produced inside the strange empty-yet-also-frenzied deadzone that was the 2020 lockdowns, and that produced a particular set of working practices around writing instalments and getting them out. In 2022, the world opened up again. That meant a return, for me, to a frenetic schedule of working with clients and speaking at events. Which was great in lots of ways. But it brought disruption to the way I worked on NWSH.

    Meanwhile, the first essay Iā€™d planned, The Worlds to Come, ballooned to something far beyond what Iā€™d intended for the monthly essays. Having published just two instalments (embarrassing) of a projected five, itā€™s clear this work is something closer to a short book than an essay. Iā€™m excited to keep putting these ideas into the world. But as this piece expanded before my eyes, any remaining chance of sticking to the planned monthly essay format slipped away.

    The mid-week instalments are what rescued all this. They are the engine of the newsletter and the product most people associate with NWSH. And they stayed strong, growing longer and deeper without me ever really intending that. After some of you requested it I started recording them as a podcast; thousands now listen rather than read. These instalments found their way into the inboxes of thousands of new (and cherished!) readers, including some influential people, and ensured that our community continued to grow. Overall ā€” and despite the monthly essays misfire ā€” it was a great year for NWSH. Thatā€™s thanks to the mid-week update, and all of you who share it.

    Thatā€™s a two-minute summary of the last 12 months. The big question, then: what next?

    Coming in 2023

    My first thought is that the fundamental positioning I outlined last year is one I still stand by.

    As loath as I am to quote myself, itā€™s worth revisiting that briefly. Around one year ago to the day, I wrote this on the point of view that NWSH would bring to its mission to understand our shared future.

    We live amid a white-hot technological revolution, a culture war, and a crisis of ecological collapse. Amid that, our systems of liberal democracy and technologically mediated consumerism are exhausted. We all know we must change course, yet we continue to march in the same old direction. In 2022, as Gramsci observed of his own society in the early 1930s, ā€˜the old is dying, but the new cannot be bornā€™. Except it is being born somewhere out there, on the fringes. I want us to travel to those places, literally and figuratively.

    Iā€™d still go along with all that.

    So itā€™s not the destination that needs to change; only the steps weā€™re taking to get there. Over the Christmas break I sat with that challenge. Hereā€™s what I decided:

    * The mid-week update remains the flagship instalment

    * Longform essays will remain, but theyā€™ll be occasional rather than monthly

    * Shorter notes will return, also on an occasional schedule, typically on a Sunday

    When it comes to the mid-week update, the decision was automatic: if it ainā€™t broke.

    On longform essays: I still want space for the deeper thinking and exploration they allow. A monthly cadence didnā€™t work out, but occasional essays can. The first mission here is to finish The Worlds to Come.

    The return of shorter notes is the biggest change. I really miss writing the kinds of notes I used to send on a Sunday. And the newsletter needs a space for thoughts that are too long for a segment in the mid-week update, but too short and maybe too fuzzy to make an essay.

    But it goes deeper than that. Part of what I love about newsletters ā€” about the email newsletter as a new literary mode ā€” is its intimacy. Sure, weā€™re now amid a newsletter explosion, and Iā€™m sending this to you via a platform created in Silicon Valley and funded by mega-VCs. But despite all that, thereā€™s still something going on here that echoes the mediumā€™s origins in the long emails from one friend to another that we used to send in the 90s. Last year, NWSH lost touch with that intimacy. This year I want to recover it.

    Short notes will allow me to send more personal reflections and do more exploratory thinking. And they can mean new kinds of content, such as reflections on the books Iā€™m reading. This could even spell the beginning of a NWSH book club, which is something people have asked for in the Slack group.

    But there I go again, piling on more before weā€™ve even started.

    Infinite games

    There we have it; the roadmap for 2023.

    Sure, 2022 didnā€™t work quite as planned. But while I might have expected that to bother me massively, in truth it doesnā€™t. Sitting with that over the break, I realised that this truth is a product of perspective. More particularly, of the perspective you necessarily take on something when you commit to it for life.

    Writing this newsletter, and building this community, is something Iā€™ll do forever. Not, in the end, because of the outcomes it produces, but for the meaning and simple joy I find in thinking through ideas about our shared future and then sharing those ideas with others.

    And given that, the monthly essays misfire seems only a tiny blip on a long journey.

    The lesson here? From my POV, itā€™s that when you embark on a project for the long haul, and when that project is an end in itself rather than only a means to some other end, youā€™re liberated into a new and fruitful way of seeing. One that helps you defeat perfectionism, stay adaptable, and find meaning in the process rather than only the results.

    When thereā€™s always tomorrow, and next year, and, I hope, next decade ā€” when youā€™re playing what has become known as an infinite game ā€” you have freedom to experiment, and mistakes donā€™t matter that much. In fact, if you arenā€™t making mistakes, thatā€™s probably a sign youā€™re playing it too safe.

    Thatā€™s not a perspective many of us get to enjoy in our work, which is so often target and deadline driven. But itā€™s a powerful one. So I recommend asking yourself: what infinite game are you playing in 2023?

    Blast Off

    The plan for this year is set. All that remains is to get to work.

    And given the moment weā€™re living in from a world-historical perspective ā€” by turns weird, exhilarating, and scary ā€” I couldnā€™t be more excited about what is ahead for our community. Our mission to make sense of a changing world and its collision with human nature ā€” new world, same humans ā€” has never been more urgent.

    Iā€™ll send the first New Week update next week. And expect the first short notes in the coming days, too; one of them will launch a new project that I canā€™t wait to tell you more about.

    In the meantime, thanks for joining me on this adventure for another year; itā€™s deeply appreciated šŸ™. And I hope youā€™re off to a great start on your journey through 2023, too.

    Until next week, be well,

    David.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.newworldsamehumans.xyz
  • Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.

    If youā€™re reading this and havenā€™t yet subscribed, join 24,000+ curious souls on a journey to build a better future šŸš€šŸ”®

    To Begin

    Iā€™m giving myself permission to play with the format of these New Week updates.

    This week, four stories caught my eye. Any one of them might lead an ordinary instalment of the newsletter. Taken together, they amount to a snapshot of the future-facing terrain that NWSH inhabits.

    Weā€™re living through an incredible moment when it comes to the collision between emerging technologies and fundamental human needs. As 2022 draws to a close, it felt fitting to capture this moment in time. And to look out to the year ahead ā€” each of these four stories is set to evolve in intriguing ways in 2023.

    Iā€™m also turning this email into a virtual time capsule, to be buried now and recovered in a NWSH instalment ten years hence. More details in the final section.

    So think of this as an end of year special edition (and forgive the fact that it comes late). More Christmas extras are coming soon! But for now, letā€™s get into it.

    šŸ“– Next generation

    This week, yet another wave of advances in generative AI.

    OpenAI released a new GPT-3 model, text-davinci-003. Early indications are that the new model is a significant improvement; itā€™s better at following instructions, outputs higher quality writing, and more often remains coherent and useful across longform narrative.

    A few days later OpenAI released ChatGPT, a version of GPT-3 optimised to respond to complex questions and enter into dialogue. Cue a storm of Tweets heralding the emergence of a whole new kind of online search:

    People also noticed that davinci-003 and ChatGPT are better at rhymes, poetry, and various forms of creative writing:

    As for text-to-image generation, that didn’t stand still either. Stability AI released Stable Diffusion 2.0. Image resolution has been improved by a factor of four; the tool now generates eerily convincing photorealism. New guard rails also make it harder to generate NSFW images. Here’s Stability AI founder Emad Mostaque:

    ⚡ NWSH Take: Text-davinci-003 feels something close to a step change; some are calling it GPT-3.5. It’s clear, if it wasn’t before, that large language models (LLMs) will have multiple breakthrough applications. Search is at the top of that list; see Metaphor, a startup experimenting with generative AI for online search. Or Consensus, a generative search engine for published scientific research. I’ve written often in this newsletter about the coming age of Virtual Companions; AI-fuelled entities that become counsellors, friends, and sages to their users. With LLMs, they’re coming, and this week OpenAI CEO Sam Altman pointed towards that future. // Meanwhile, it’s only three months since Stable Diffusion was made public. Now we have a quantum leap in image quality. Mostaque’s belief that these tools constitute a Gutenberg moment is credible even if you only consider text-to-image tools. But listen to interviews and you’ll learn that Stability AI is also working on text-to-music, and that the end game according to Mostaque is text to anything: movies, PPT presentations, virtual worlds. // What can you say? Rumour has it that GPT-4 is months away. We’re at the beginning of a decade-long wave of innovation. Next year is going to be insane. Who knows where we’ll be by 2032.
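
    And if you would rather experiment with the new Stable Diffusion release than read about it, here is a minimal sketch using Hugging Face’s open source diffusers library; the model identifier, the prompt, and the assumption of an NVIDIA GPU are mine, so treat it as a starting point rather than a recipe.

    import torch
    from diffusers import StableDiffusionPipeline  # pip install diffusers transformers accelerate

    # Download the Stable Diffusion 2 weights from the Hugging Face Hub and move them to the GPU.
    pipe = StableDiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2",
        torch_dtype=torch.float16,
    ).to("cuda")

    prompt = "a photorealistic portrait of an astronaut reading a newsletter on the Moon"
    image = pipe(prompt).images[0]  # a PIL image
    image.save("astronaut.png")

    On a machine without a GPU you can drop the torch_dtype argument and run on the CPU instead; it works, but generation is slow.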

    🚀 Give me some space

    NASA’s Orion spacecraft this week reached the apex of its journey beyond the far side of the Moon.

    The craft hit a point 434,523 kilometres from Earth, and 64,374 kilometres beyond the Moon, before swinging around and starting on its course back to Earth. At the midpoint of its journey it took this picture — an instant space age classic:

    Orion is uncrewed, but this is the furthest from Earth a spacecraft meant for human occupation has ever travelled; the record was previously held by Apollo 13, which did have people onboard, and flew in 1970.

    ⚡ NWSH Take: This Orion mission is a test — the first stage of NASA’s Artemis programme to put humans back on the Moon by 2025 and establish a permanent base there by the end of the decade. // So far, that project is more or less on track. No wonder, then, that analysts are already talking about the coming Moon economy: a new wave of startups catering to our next foray into space. Meanwhile, with China planning its own base on the Moon for 2028, I’ve written on how the Moon is about to become a new site of geopolitical tension; NASA are drawing up frameworks to deal with lunar conflict. // The coming 12 months, then, will set the stage for a new space age: one intended to see us establish the Moon as a base from which we set out to Mars and beyond. When I revisit this in ten years, it should be possible to email humanity’s lunar waystation.

    🧠 Trick of the mind

    Brain-machine interface startup Neuralink hosted a Show and Tell event this week.

    On stage, Elon Musk announced that the company is around six months away from starting human trials of its N1 brain implant. In the meantime, trials on animals continue. Musk played a video that showed a monkey using the implant to ‘telepathically type’ — using his mind to control an on-screen cursor and type the words ‘Welcome to Show and Tell’ by directing his attention to highlighted letters.

    ⚡ NWSH Take: Neuralink won’t be the first to implant these kinds of interface chips in human brains; another US startup, Synchron, began human trials in Australia in 2020. And the technology has been around for about two decades. Regulatory approval, and proof that the technology is safe for humans, has been the major roadblock. // Musk previously said that human trials would begin in 2020, and then in 2022. But put this news announcement together with Synchron’s ongoing work and other developments, and it’s clear that we’re approaching technologies that will be transformative for people who are paralysed, and may eventually revolutionise the way all of us interact with technology. Neuralink is waiting for FDA approval before it starts human trials. The world will watch for an update in the first few months of 2023.

    🧬 Copy and paste

    Also this week, a significant CRISPR advance.

    MIT scientists showcased a new technique that could massively upscale the usefulness of CRISPR gene editing.

    The ‘drag and drop’ method, which the scientists are calling PASTE, allows for the insertion of large sections of DNA — as long as 36,000 DNA base pairs — into human cells. That means it could prove useful for the treatment of diseases caused by faulty genes with large numbers of mutations, including cystic fibrosis.

    Omar Abudayyeh, a McGovern Fellow at MIT’s McGovern Institute for Brain Research, said:

    ‘It’s a new genetic way of potentially targeting these really hard to treat diseases. We wanted to work toward what gene therapy was supposed to do at its original inception, which is to replace genes, not just correct individual mutations.’

    The full paper was published this week in the journal Nature Biotechnology.

    ⚡ NWSH Take: The FDA approved a revolutionary drug treatment for cystic fibrosis in 2019. It doesn’t work for everyone, and it costs over £100,000 a year per patient. Now, we’re talking about genetic editing technologies that will cut and paste this awful disease out of existence. There’s still work to be done before PASTE can be used in this way. But it seems we’re close. // This new PASTE technique has implications when it comes to another NWSH obsession: longevity. There’s long been hope that CRISPR could help us modify the action of genes that regulate ageing. But those genes are often long and complex. Now, PASTE raises the possibility that we’ll be able to get a handle on them. Back in New Week #51 I wrote about Altos Labs, a new Silicon Valley startup pursuing the secrets to eternal life. And in NWSH #44 I wrote about the ways in which ageing populations are reshaping societies in the Global North; the trends outlined in that essay may be set to intensify in entirely new ways.

    Time to be Alive

    Thanks for reading this week.

    I’m turning this week’s instalment into a time capsule, buried now and intended to be recovered by my future self. I’ll revisit this snapshot of the closing weeks of 2022 in December 2032, and reflect on the ideas discussed here and what happened in the intervening time. The calendar reminder is set; now all I need to do is survive the next ten years.

    It’s impossible to know where we’ll be by then. All we can be sure of is that the ongoing collision between new technologies and our eternal shared nature will continue to underpin our collective story.

    This newsletter will keep watching every step of the way. And sharing that journey with you means a lot.

    If you’ve enjoyed NWSH this year, why not forward this email to someone who’d also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.

    I’ll be back next week. Until then, be well,

    David.

    P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.newworldsamehumans.xyz