Episodes
-
Summary
In this episode of the Wild Dog AI podcast, Buddy Jericho interviews Chris Poulter and Jane van Tienen from OSINT Combine about the upcoming Australian OSINT Symposium. They discuss the evolution of OSINT, the role of emerging technologies, and the social responsibility that comes with OSINT tooling, along with the growth of the Symposium and the importance of networking and engagement. Chris Poulter shares his excitement about integrating AI into OSINT and the challenges and opportunities it presents, including the importance of sound methodology and the use of retrieval-augmented generation (RAG). The speakers also discuss the value of OSINT, the significance of refining models, the low barrier to entry in the field, the ethical challenges of emerging technologies, and the importance of international participation in the OSINT community. The Symposium is described as a learning and networking opportunity with a focus on OSINT tailwinds, featuring keynote speeches, panel discussions, case studies, and opportunities for interaction and collaboration. The virtual training day is also highlighted as a chance to gain practical skills in OSINT.

Thank you for tuning into the Wild Dog AI podcast. Stay connected with the frontier of intelligence and security by subscribing to our podcast on your favorite platform. Engage with us on LinkedIn HERE, and join the conversation by sharing your thoughts and questions. Together, let's lead the pack in understanding and leveraging the power of generative AI.
For more insights and episodes, visit us at Wild Dog AI Podcast. Continue the journey of innovation with us, because in the race against adversarial technologies, every insight counts.
Interested in seeing Indago in action? Visit us at indagotech.com and sign up for a free demo today. Discover how our platform can transform your intelligence report process.
Don't forget to follow us, rate us, and leave a review. Your feedback helps us grow and improve, bringing you more of the content you need to stay ahead. -
Summary
In this episode, Senior Analyst Heather Perez discusses the importance of source validation in the age of artificial intelligence (AI). She emphasizes the need for analysts to be able to defend their assessments and validate the data behind them. Heather also highlights the limitations of AI models and the importance of human expertise in interpreting and validating the information these models generate. She suggests that analysts use AI as a tool to enhance their workflow rather than rely on it exclusively. She also discusses the challenges of validating sources in the AI era, the risks of misinformation, ongoing efforts to improve source transparency, and the need for ethical frameworks and policies in AI development.

Takeaways
Source validation is crucial in ensuring the accuracy and reliability of data used in AI models.
Analysts should be able to defend their assessments and validate the data they provide to make informed decisions.
AI should be used as a tool to enhance workflow, but human expertise is still essential in interpreting and validating information.
Validating sources in the AI era can be challenging due to the vast amount of data and the potential for misinformation.
Efforts are being made to improve source transparency and develop ethical frameworks in AI development.

Chapters
00:00 Welcome
02:42 Understanding the Role of Language Models in Synthesizing Information
06:13 Using AI as a Tool: Collecting and Validating Data
09:32 Challenges of Tracking AI Data Sources
10:00 The Implications of Misinformation in the Age of AI
23:02 Enhancing Source Transparency
27:10 The Role of Fact Checkers
31:23 The Need for Ethical Frameworks
35:10 Layering Models to Improve Accuracy
-
Summary
Jeremy Clark and Scott Sparks, co-founders of IXN Solutions, discuss the importance of counterintelligence (CI) and insider threat programs in protecting businesses and national security. They highlight the case of Linwei Ding, a former Google employee charged with stealing proprietary AI information for companies in China, as an example of the serious threats technology companies face. The conversation emphasizes the need for coordination between government and corporate entities to address these threats, and the importance of building a comprehensive insider threat program that combines technological solutions with employee engagement and education. Jeremy and Scott then turn to small and medium-sized technology companies, outlining six steps to building an insider threat program, including understanding your supply chain, outsourcing research, and implementing a training program. They also stress the need for small businesses to conduct due diligence in hiring and the value of leveraging AI and generative AI in supply chain security and counterintelligence efforts.

Takeaways
Counterintelligence and insider threat programs are crucial for protecting businesses and national security.
Coordination between government and corporate entities is essential in addressing insider threats.
Technology companies, especially those working with the government, need to prioritize the protection of their proprietary information.
Building a comprehensive insider threat program involves both technological solutions and employee engagement and education.
Considering the supply chain and ensuring the security of materials and components is vital in developing secure technologies.
Small and medium-sized technology companies should prioritize building an insider threat program to protect against potential security breaches.
Six steps to building an insider threat program include understanding your supply chain, outsourcing research, and implementing a training program.
Small businesses should be cautious when hiring foreign nationals and conduct due diligence to ensure their security.
AI and generative AI can greatly enhance supply chain security and counterintelligence efforts by quickly categorizing and analyzing large volumes of data.
Tools such as Logically AI, Flashpoint, and LexisNexis can be leveraged to aid in building insider threat programs.
The cost of building these programs is decreasing, making it more accessible for small businesses to implement effective security measures.
-
In this thought-provoking episode of "Navigating Technology in Intelligence: Balancing Innovation and Ethics," we delve into the complex landscape of modern law enforcement's interaction with emerging technologies. Joined by Chief Philip Lukens, a seasoned veteran in law enforcement innovation, we explore the apprehensions and ethical considerations that agencies face today.
Chief Lukens brings his extensive experience from working in various capacities, including his role in significant projects with the National Academy of Sciences and the National Institute of Justice. Throughout our discussion, we unpack the challenges law enforcement agencies encounter, from the initial reluctance to adopt new technologies to the strategies that can facilitate more seamless integration.
The conversation also covers the GAI Model Policy, providing insights into how current policies shape the adoption of technologies and the importance of proactive engagement rather than a reactive "wait and see" approach. We discuss the urgency for law enforcement to embrace technology not just for efficiency, but as a critical tool in maintaining a step ahead of adversaries.
Moreover, Chief Lukens shares invaluable advice for young officers and agency leaders on leveraging technology to enhance investigative and operational capabilities while ensuring adherence to ethical standards and the protection of civil liberties.
For more insights from Chief Lukens, explore his thoughts and writings on his Substack: https://philiplukens.substack.com/
Tune into this essential discussion that sheds light on the necessity of innovation balanced with ethics in the intelligence and law enforcement communities. Don’t miss out—discover how technology is reshaping security and surveillance landscapes and what this means for the future of law enforcement.
-
Overview
Language models are constantly being manipulated by adversaries and bad actors to spread propaganda and distort information. Manipulating a language model differs from attacking a traditional software application, and defending against it requires constant updates and retraining, a challenge at the pace technology is advancing. Language models can be manipulated by uploading PDF documents containing new instructions or by configuring APIs with read and write privileges. Terrorist groups, extremist organizations, and cybercriminals use language models to spread disinformation, amplify malware and phishing attacks, and recruit new members. Law enforcement and intelligence professionals need to understand and leverage AI tools to combat the manipulation of language models.
This episode explores the critical importance of law enforcement and intelligence professionals adopting and integrating emerging technologies, and discusses the risks of falling behind adversaries who leverage these technologies, as well as the impact of security measures.
In this episode, Indago Founder and CEO Buddy Jericho and Senior Intelligence Analyst Heather Perez discuss how adversaries and bad actors use language models to manipulate information and spread propaganda. They highlight the difficulty of keeping models safe and the constant need to update and retrain them, and they cite examples of how language models are being manipulated, such as uploading PDF documents with new instructions for the model's operation. They also discuss the manipulation of content and the importance of configuring APIs securely. The conversation delves into the use of language models by terrorist groups, extremist organizations, and cybercriminals to spread disinformation, amplify malware and phishing attacks, and recruit new members, and emphasizes the need for law enforcement and intelligence professionals to understand and leverage AI tools to combat these threats.
SOUND BITES
"Deepfakes are being used for disinformation, fraud schemes, propaganda, and AI-powered malware."
"Russian bot farms are using AI to run numerous fictitious accounts and create authentic-appearing social media personas targeting the younger generation for recruitment."
-
Summary
In this episode, host Buddy Jericho is joined by Lead Analyst Randy Stickley and Senior Analyst Heather Perez to discuss critical thinking and generative AI. They explore the importance of understanding the perspectives and motivations of individuals in intelligence analysis and how this applies to language models, and they touch on the use of AI by criminal and extremist groups for fraud, propaganda, recruitment, and disinformation. The conversation highlights the need for effective questioning techniques when interacting with language models and the importance of human oversight in ensuring accurate, contextually appropriate outputs. The guests discuss the potential for misinformation and disinformation to be operationalized through AI platforms and the resulting need to assess and verify information. They emphasize the role of generative AI as a tool that enhances the analytical process rather than replacing human intuition and expertise, the importance of continuous learning and adapting to new technologies, and the need for analysts to understand the sources and limitations of AI-generated data. The conversation concludes with a discussion of the future of generative AI and its role in intelligence analysis.

Takeaways
Understanding the perspectives and motivations of individuals is crucial in intelligence analysis and applies to language models as well.
Criminal and extremist groups are leveraging generative AI for fraud, propaganda, recruitment, and disinformation.
Effective questioning techniques are essential when interacting with language models to obtain accurate and relevant information.
Human oversight is necessary to ensure that language models produce contextually appropriate outputs and to add analytical insights.
Operationalized misinformation at scale is a concern, and skepticism is warranted when using language models in law enforcement.
Generative AI is a powerful tool that can enhance the analytical process, but it should not replace human intuition and expertise.
Analysts need to prioritize continuous learning and adapt to new technologies to incorporate them effectively into their workflow.
Assessing and verifying information is crucial, as generative AI can be used to operationalize misinformation and disinformation.
Domain expertise and critical thinking skills are essential in interpreting AI-generated data and providing nuanced insights.
Understanding the sources and limitations of AI-generated data is important for analysts to make informed decisions.
Continued education and adaptation are necessary as AI technology evolves rapidly.
Sound Bites
"Understanding the perspectives and motivations of individuals..."
-
Overview
Safe Superintelligence Inc. (SSI) is a new company founded by Ilya Sutskever focusing on the safety and security of generative AI.
SSI aims to build technologies that ensure language models stay safe and do not go rogue, especially as we approach artificial general intelligence (AGI).
The company could provide tools and protocols to combat the flood of information and sift through data to identify potential threats and trends.
SSI's technologies could help law enforcement and the intelligence community optimize their workflows, enhance threat detection, and counter malicious actors.
The focus should be on understanding and utilizing AI tools responsibly to protect national security and the rights of citizens.
Generative AI poses risks in terms of overwhelming content, misinformation, deepfakes, and cybersecurity.
Safety protocols, education, and regulation are needed to address these challenges.
Understanding how AI models work is crucial to avoid unintentional misinformation.
Generative AI is already being widely adopted in various industries.
The potential role of SSI in addressing these challenges is discussed.
In this episode, Buddy Jericho, Founder and CEO of Indago, discusses the safety and security of generative AI, focusing on the recent departure of Ilya Sutskever from OpenAI and the launch of his new company, Safe Superintelligence Inc. (SSI). Jericho is joined by Senior Analyst Heather Perez and Chief Marketing Officer Amy Do. They explore the potential impact of SSI in addressing the challenges of using large language models in law enforcement and the intelligence community, and the need for tools and technologies to combat deepfakes, misinformation, and cyber threats.
Much of the episode emphasizes the importance of understanding and utilizing AI tools responsibly to protect national security and the rights of citizens. The conversation explores the risks and challenges associated with generative AI, particularly overwhelming content, misinformation, deepfakes, and cybersecurity, and the panel discusses the safety protocols, education, and regulation needed to address them. They also touch on the ethical use of AI and the importance of understanding how AI models work in order to avoid unintentional misinformation. The conversation highlights the growing adoption of generative AI across industries, the need for individuals to be aware of its presence in their daily lives, and the potential role of SSI in addressing these challenges.
Chapters
00:00 Introduction and Overview
26:20 Ilya Sutskever and the Launch of SSI
27:34 Building Tools for Law Enforcement and Intelligence
32:26 Combatting Deep Fakes and Misinformation
38:07 Ensuring Safety and Security in AI
39:07 The Ris
-
In this episode, Buddy Jericho interviews Heather Perez, an analyst with extensive experience in report writing. They discuss the challenges and time-consuming nature of traditional report writing methods and how generative AI, specifically Indago, is revolutionizing the process. Heather shares her insights on prompt engineering, the benefits of using templates, and the time-saving capabilities of Indago. She also highlights the ability to analyze documents in different languages and the importance of understanding and verifying data. Overall, the conversation emphasizes the efficiency and effectiveness of using generative AI in report writing.
Takeaways
Traditional report writing methods can be time-consuming and challenging.
Generative AI, such as Indago, is revolutionizing the report writing process.
Prompt engineering is crucial for obtaining accurate and relevant information from language models.
Using templates can save time and ensure standardization in report writing.
Generative AI allows for the analysis of documents in different languages.
Understanding and verifying data is essential for maintaining the quality of reports.
-
Summary
In this episode of Wild Dog AI, Indago Founder and CEO Buddy Jericho explores the differences between open-source and closed-source language models and their impact on the AI community, as well as the defense and security space. He covers the definition and components of language models, the benefits and drawbacks of each approach, and the arguments for using each in the defense and intelligence sector. The conversation concludes by highlighting the importance of carefully weighing specific operational needs, budget constraints, and long-term strategic goals when choosing between open and closed models.
Takeaways
Open-source language models offer transparency, customization, and community collaboration, but require technical expertise and lack dedicated support.
Closed-source language models provide professional support, regular updates, and security features, but come with licensing fees and limited flexibility.
The choice between open and closed models depends on specific operational needs, budget constraints, and long-term strategic goals.
Open-source models are likely to gain more adoption in the private sector, while closed-source models may be preferred in the public sector for security reasons.
Both open and closed models are constantly evolving and improving, with the potential for leapfrogging in performance capabilities.
Chapters
00:00 Introduction: Exploring the Difference Between Open-Source and Closed-Source Language Models
03:02 Understanding Language Models: Components and Functionality
08:07 The Benefits and Drawbacks of Open Source Language Models
25:59 Arguments for Leveraging Closed Source Models in the Defense and Intelligence Space
30:03 Conclusion: Considerations for Choosing Between Open and Closed Models
-
Summary
In this episode of Wild Dog AI, Indago Founder and CEO Buddy Jericho dives into the differences between keyword searches and prompting with language models, highlighting the evolving role of language models in open-source research and emphasizing the importance of integrating the two approaches for more effective research outcomes.
Takeaways
You'll leave the podcast with an understanding of keyword searches and prompting with language models, and of how the two are distinct skills that require different training rather than competing approaches.
In fact, integrating keyword searches and language models can significantly enhance the effectiveness of open-source research, providing more powerful capabilities when used together.
The evolving technology of language models is here to stay and will continue to improve, offering greater convenience and utility for research professionals.
Chapters
00:00 Introduction to Open Source Research and Language Models
01:44 The Difference Between Keyword Searches and Language Models
08:45 Engaging Language Models as an AI Copilot
22:59 The Future of Open Source Research: Embracing Language Models
25:45 Recap
-
Welcome to the Wild Dog Podcast
Dive deep into the world of intelligence and security with the Wild Dog Podcast, your premier source for insights into the latest trends and technologies shaping the future of intelligence operations. Each episode features discussions with leading experts like this one featuring Heather Perez and Jesse Moore, who bring their wealth of knowledge and experience to the table.
What We Discuss in This Episode:
The role and impact of generative AI in intelligence and security.
Practical tips for integrating new technologies into traditional workflows.
The importance of validating information and the human element in tech-driven environments.
Strategies for enhancing privacy and security in the deployment of new tech.
Blog Posts Discussed in This Episode:
- Revolutionizing Intel Reporting: The Pivotal Role of Generative AI (https://indagotech.com/blog/blog-revo...)
- Elevating Intelligence Reporting: The Revolutionary Role of Generative AI (https://indagotech.com/blog/elevating...)