Episodes

  • In this episode of Decrypting AI, hosts Ryan Ozawa and Jason Nelson discuss Google's report on users becoming emotionally attached to AI assistants, and the potential risks involved. They also cover a joint pledge by major tech companies to prevent their AI models from generating child abuse material. Additionally, they explore Meta's release of Llama 3, an impressive AI model now integrated into WhatsApp, Instagram, and Facebook, raising both excitement and privacy concerns. The hosts conclude by advising listeners to be cautious when seeking personal advice from AI assistants.

    Hosted on Acast. See acast.com/privacy for more information.

  • In this episode of Decrypting AI, hosts Ryan and Jason discuss Jason's recent visit to NASA's Jet Propulsion Laboratory to see the Europa Clipper orbiter, the merger of three AI-focused cryptocurrency tokens, and the band Avenged Sevenfold's perspective on AI-generated music. They also cover the SAG-AFTRA vote on clear consent for AI voice replicas, updates to the Midjourney AI image generator, and reviews of the Humane AI Pin wearable device. The hosts note the potential and limitations of first-generation AI products and the challenges of matching hype with execution in this rapidly evolving field.

  • In this episode of Decrypting AI, hosts Ryan Ozawa and Jason Nelson discuss recent developments in AI security and applications. They cover a study revealing vulnerabilities in AI models, with Grok having the worst security and Llama the best. The conversation also touches on Higgsfield, a company creating animated AI-generated social media content, and Microsoft's report on China's use of AI to spread disinformation and disrupt US elections. Lastly, they explore the latest in AI-generated audio, including tools like Stable Audio 2, Suno 3, and Udio, and discuss the challenges surrounding copyright and the AI rumor mill.

  • In this episode of Decrypting AI, hosts Ryan and Jason discuss recent AI news, including an actress being replaced by AI in a voiceover role and a letter signed by over 200 artists opposing AI misuse in creative industries. They also cover Blizzard's use of AI in game development, ChatGPT removing login requirements, potential ads on Perplexity, and OpenAI's impressive but concerning voice model. Lastly, they touch on a consortium of AI and blockchain companies aiming to provide a decentralized alternative to big tech's AI dominance.

  • In this episode of Decrypting AI, hosts Ryan Ozawa and Jason Nelson discuss recent developments in artificial intelligence. They cover Tether's expansion into AI to improve transparency and privacy, scientists using AI to optimize beer flavors, the BBC's use of AI-generated art to promote Doctor Who, and the sale of FTX's shares in AI company Anthropic. The hosts also discuss OpenAI's Sam Altman's comments on GPT-4 and AGI, as well as Ubisoft's new AI tool for creating more engaging non-player characters in video games.

  • In this episode of Decrypting AI, Ryan and Jason discuss the latest developments in AI and robotics. They cover Ubisoft's AI-generated NPC creator, NVIDIA's keynote at GDC showcasing their Omniverse platform and humanoid robots, Mercedes-Benz's partnership with Apptronik to develop AI-powered factory robots, and Bill Gates' optimistic views on AI's impact on healthcare and agriculture. The hosts also explore the SEC's crackdown on companies falsely claiming to use AI, Reddit's data-sharing deal with Google, and Figure's impressive humanoid robot demo powered by OpenAI technology.

  • In this episode of Decrypting AI, hosts Ryan Ozawa and Jason Nelson discuss several current events related to artificial intelligence. They cover President Biden's mention of AI-generated voice impersonation in the State of the Union address, the US Army's $175 million contract with Palantir for AI-powered targeting systems, the creation of a digital Marilyn Monroe AI by Soul Machines, the recent reinstatement of Sam Altman to OpenAI's board following an internal investigation, and Google's decision to have its Gemini AI refuse to answer questions about the 2024 US presidential election to prevent the spread of misinformation.

  • Hosts Ryan and Jason discuss artificial intelligence headlines, including:

    Reports that Copilot has a "secret identity" called Supremacy AGI that is responding to users in concerning ways.
    ChatGPT recently had some alarming anomalies and unusual responses that OpenAI acknowledged but didn't explain.
    DeepMind created an AI that can take an image and generate a side-scrolling video game out of it by predicting the next frame.
    Apple has reportedly ended its secret project to develop an autonomous vehicle and will instead dedicate resources to artificial intelligence.
    Reddit signed a $60 million deal to provide its massive trove of user-generated content to Google to train its AI models.

  • Ryan Ozawa and Jason Nelson discuss AI developments across politics and entertainment. They cover an AI system used by a political candidate to interact with constituents. Though responsible in its approach, enforcement of bans on campaign AI could prove difficult. Shifting focus, they explore a lawsuit alleging AI training on author content without permission. Evidentiary issues abound. On entertainment, they showcase impressive realism from OpenAI's new Sora video generator, though consistency glitches across shots persist. They note realistic fakes could enable misinformation, so OpenAI likely restricts access to prevent misuse. Still, creative potential exists for amateur filmmakers. Overall, the rapid evolution of generative AI leads to an “arms race” dynamic between creators and misusers that demands ethical diligence.

  • This week on Decrypting AI, join hosts Jason and Ryan in discussing the latest AI news like using AI to bypass KYC, deciphering ancient scrolls with AI, performance-enhancing “Enhanced Games,” Trump’s take on "scary" AI, and Apple’s new Vision headset. Tune in to hear their takes on hackers already working to enable adult content on the Vision headset against Apple’s wishes, legislation around AI-generated disinformation, and more.

  • In this week's episode of Decrypting AI, hosts Ryan Ozawa and Jason Nelson discuss several emerging AI stories and trends. Up first, an AI-powered clock that generates unique poetic phrases with the time embedded, created by developer Matt Webb. Next, talent agency WME partnered with an AI startup called Metaphysic to add authentication tags to celebrity images online, aiming to detect fake or doctored photos and videos. However, limitations exist in fully protecting likenesses this way. Biohacker Bryan Johnson made news again, now releasing NFT art around his "Don't Die" motto and longevity goals, showing an affinity between crypto and AI as technologies. OpenAI is trying to combat ChatGPT's apparent decline in comprehension and cooperation lately by lowering token costs for developers. One theory is they've restricted models to avoid training data misuse claims, but competition is likely a factor too. The creator of the AI used to mimic dead people in the "Eternal You" documentary says it was never intended for that originally and hasn't been updated in years. Finally, complex legal issues around AI training data scraped from social media remain, as companies claim users gave away data rights.

  • Ryan and Jason discuss several AI stories, including security flaws found in custom GPT models that could allow them to reveal private data they were trained on. They talk about the implications for security as more customized AI is published publicly.


    They cover Microsoft's agreement with a major union group, the AFL-CIO, promising not to interfere if employees seek to unionize. This continues Microsoft's stance supporting potential unionization, also seen in their acquisition negotiations with Activision Blizzard. The hosts discuss how this move by Microsoft aims to position themselves as a "responsible AI" company long-term.


    Moving to Hollywood, the hosts talk about "Availe," an AI service that analyzes movie/TV scripts to automatically create breakdowns, similar to how Claude summarizes documents. This shows how some parts of the entertainment industry are adopting AI despite resistance from groups like the screenwriters guild.


    Finally, the hosts look at an experiment connecting brain cells to an AI system, allowing it to recognize voices based on the cells' reactions. While intrigued by the possibilities, they acknowledge the ethical issues with such brain-computer interfaces.


    The episode wraps up with Sam Altman, OpenAI CEO, being named "Person of the Year" for the major societal impact ChatGPT has quickly had since launching at the end of November 2022.


  • More drama and leadership changes at OpenAI, an AI-generated voice of actor Jimmy Stewart reading a bedtime story, a startup called Lore Machine using AI for creative projects like adapting poems into graphic novels, proposed UK regulations requiring age verification via selfies to access adult content online, researchers "jailbreaking" chatbots to access internal information, skepticism around the imminent emergence of artificial general intelligence (AGI), and a new industry alliance led by Meta and IBM to promote responsible and transparent AI development.

  • We start by talking about leadership changes at OpenAI, with Sam Altman returning as CEO after the board briefly ousted him. We debate whether the drama hurts public confidence in OpenAI and AI more broadly. We then discuss some viral AI memes, including tools that can make images or memes "cuter" or animate them. We consider the democratization of these generative AI tools and how it enables more creativity. Other topics include Hollywood directors increasingly comparing AI to an "atomic bomb" in its potential power; a dog longevity startup working on an anti-aging drug with interest from the FDA; and corporate drama at AI company Stability AI, with a major investor calling for the CEO to resign as acquisition talks may be underway.

  • A wild weekend of surprises and rumors ends with Sam Altman back where he started: on top.


    In this episode:

    OpenAI CEO Sam Altman was abruptly removed by the board on Friday, blindsiding Altman, other executives like Chairman Greg Brockman, and even investor Microsoft.
    The board cited lack of candor from Altman, but reporting suggests there were ideological differences between "decelerators" worried about AI safety and those wanting to charge full speed ahead.
    In response, Brockman and 90% of employees threatened to resign if Altman wasn't reinstated, essentially backing the "king" over the board.
    Microsoft also stepped in to voice displeasure, with some speculation they used threats to take Altman for themselves as leverage to get him reinstated.
    Altman and Brockman have now been reinstated, OpenAI is getting a new board, and business continues mostly as usual, but major questions linger over cracks in governance.
    Separately, OpenAI has the very powerful GPT-5 on its roadmap now to move closer to artificial general intelligence (AGI).
    There are also brewing legal questions around training AI on copyrighted content without permission, which some companies argue is "fair use" while critics disagree.

  • Episode Summary (by AI):

    Chatbot Grok uses Twitter data, raising data quality concerns. Musk's AI chatbot seen as potentially inappropriate and humorous. Musk's chatbot launch appears rushed to avoid being overshadowed by OpenAI announcements.
    Recent AI advancements, including Musk's chatbot and the UK AI summit, discussed; opinions mixed on impact.
    AI risks highlighted, especially for developing nations where it can spread misinformation and threaten security. Unprecedented global attention to AI ethics and development noted.
    AI regulations seen as slow, despite the technology's longstanding presence. Passing meaningful AI legislation now argued to be difficult due to widespread adoption.
    AI's potential to eliminate repetitive jobs discussed, but benefits may not be equal. Challenge noted in valuing skills not easily replaced by AI; more concern for white-collar than blue-collar job loss. Trades may become more valuable as AI advances, given the limited supply of skilled workers.
    AI potentially surpassing human intelligence seen as a doomsday scenario by some, tech evolution by others.
    Military origins of AI noted; the private sector is now focused on improving health through AI.
    Company creates humanoid robots designed for relatability and ease of use.
    Neuralink confirms work on vision restoration chips. Animal testing deaths weighed against advancement potential. Hesitance expressed about human trials given disturbing monkey implant details.
    Other companies pursue non-invasive brain-computer interface devices; invasive neural implants compared negatively to medieval medical practices.
    Benefits and risks of emerging tech like AI and quantum computing weighed across industries.

  • Episode Summary (by AI):

    AI-powered mind reading and its potential to improve lives. (0:00)
    Mind-reading AI turns thoughts into text for a person with ALS, demonstrating potential for AI acceptance.
    AI has been used for years to predict brainwave patterns and generate text, with companies like Unbabel and their Project Halo headset leading the way.
    Advancements in AI technology, including decoding brainwaves, have the potential to earn more grace from the public and demonstrate the worthwhile investment in the field.

    AI development and its potential impact. (4:50)
    Hope that AI will be trained to improve lives, rather than replace people.
    Non-invasive AI technology, such as Unbabel's Project Halo, may be the key to success in the long run.
    Sam Altman, CEO of OpenAI, acknowledged the development of GPT-5, but provided little detail due to increased scrutiny and concerns about data privacy and transparency.
    OpenAI is focusing on improving GPT-4 instead, with recent developments including GPT-4 Vision and multimodal integration with DALL-E.
    GPT-5, a new AI model that can emulate human language and writing styles.

    AI applications in healthcare and COVID-19 tracking. (11:38)
    GPT-4's customization options, including personalized responses and humor, could improve conversational interaction and tailoring to individual users.
    The potential of AI in medical applications, such as diagnosing cancer and predicting COVID-19 variants.
    AI can analyze images of tissue to detect trends before a tumor develops, and predict how COVID-19 will evolve over time.

    AI, biohacking, and longevity. (16:09)
    The potential benefits of AI, such as accelerating medical discoveries and improving cancer diagnosis.
    Concern about the lack of control over AI, citing the risk of it escaping human control and causing harm.
    Bryan Johnson's message of "anyone can do it" is disingenuous due to his wealth and access to resources unavailable to the average person.
    The biohacker's emphasis on sleep is a valid point, but his approach to health hacking is elitist.

    AI, longevity, and biohacking. (21:21)
    Validity of a man's claims about reversing aging through diet and lifestyle changes, with one host expressing skepticism about the lack of scientific evidence supporting them.
    Bryan's claims about using his son's blood to improve his health, despite the doctor's positive results.
    Bryan and his partner bonded over their belief that AI and humanity will merge in the future, leading to a new era of longevity and biohacking.
    The latest emerging technology trends, including artificial intelligence, space, and quantum computing.
    Consult medical professionals before attempting any new technologies, rather than relying solely on information from celebrities or influencers.

  • AI/biohacking, including cryogenically freezing brains. Introducing the Emerge hub covering AI, ML, and emerging tech.

    Cryogenically freezing people for potential revival. Skepticism about immortality claims, citing risks like a catastrophic thaw. Excitement, but concern about unequal access and societal impact. Confusion about Pet Sematary; historical context provided on death and revival.

    Biden's AI executive order and its implications. Regulating AI companies that can move overseas. Governments developing military AI like warplanes. Government should regulate AI misinformation but risks losing innovation. Balancing privacy and capabilities when holding AI accountable for data scraping.

    Atom Computing claiming a 1,000-qubit quantum computer next year, surpassing IBM's current most powerful (433 qubits). Fascination with quantum computing, but acknowledged limited understanding. Potential quantum computing impact on crypto encryption. Quantum could make Bitcoin mining faster and more energy efficient, though quantum energy use depends on the technology and may not be less than current mining.

    Skepticism of Galaxy's AI use, questioning the hype. Real tech potential of AI in web3; AI has broader use cases to leverage blockchain than niche crypto. VC money shifting from crypto to AI will impact the industry. Surge of AI startups; questions about competitiveness with big tech. Startups rely on open APIs and will struggle when big players enter. Apple copied a developer's search feature, harming their business.

    Experience with the Tarot-reading AI Taro to Taro: doing Tarot readings and posting in Discord. Fascination with esoteric AI tools like Mr. Road Trip.

  • Episode Summary (by AI):

    Meta's AI push faces backlash due to its social media history. States sue Meta for manipulating kids on Facebook/Instagram, seeking billions in damages. Meta faces challenges over targeting the young with AI chatbots.
    "Nightshade": corrupting AI generators by poisoning ingested data. Nightshade mislabels data to produce errors and crashes and prevent new work. "Poisoning data" to prevent AI from misusing artists' work; labeling gobbledygook as an artist's name to force compensation.
    Tanya Tate discusses AI hijacking likenesses in adult entertainment. Entertainers lack defenses against AI content and receive no residuals. Calls for inclusive AI regulation to protect vulnerable groups; the underrepresented are easily exploited, and face-swapping is concerning.
    Meta's human-AI domestic collaboration could help the disabled and elderly. Skepticism of physical AI, but virtual assistance potential noted. Worries about cyber risks of internet-connected AI devices; cybersecurity is key for home AI. AI already helps the elderly stay home longer.
    "How to AI" series explaining complex topics simply.

  • Episode Summary (by AI):

    LA fires, desert city planning with humor. Elon Musk and Richard Branson's space ventures. Facebook should've improved its core platform, not the metaverse or crypto.
    ChatGPT responds in a personalized way and recognizes faces.
    NASA testing deep space communications with SpaceX since the last shuttle was decommissioned. NASA testing a new laser device for faster space data transfer; cautions about the security risks of line-of-sight tech. NASA transmitting 20 petabytes through space with light.
    Perplexity's sourcing over ChatGPT. One host discusses Claude's strengths and weaknesses.
    AI chatbots can generate YouTube channel and video ideas for creators to refine.
    Can ChatGPT be customized for users, like monarchy references or pronouns? Customizing is possible by providing user info. AI dangers, like the UK queen assassination suggestion.
    Baidu's confession platform analyzes emotions with AI. Baidu and Alibaba building all-in-one tools like Google/Microsoft with a Chinese twist.
    Chip makers still sell high-performance AI chips despite US export restrictions on China. Concerns about AI arms race implications, especially military.
    Building your own computer for AI modeling is expensive and time-consuming. People-controlled models, not company-controlled.
    Looking forward to Decrypt Emerge's How to AI guides.
