Episodes

  • 🔒 How do we secure software in an era of open-source dominance and AI-driven threats?
    On this episode of Tech Transforms, Scott Orton and Mike Mehlberg join Carolyn Ford to explore the future of secure software development. They tackle:
    ✅ The Power of SBOMs – Why software transparency is a game-changer (and why some companies still resist it)
    ✅ Open-Source Risks – How attackers exploit vulnerabilities in widely used packages and what to do about it
    ✅ Rust Programming Language – The case for transitioning to a more secure, memory-safe coding environment
    ✅ AI in Software Development – How AI is transforming cybersecurity, from automated vulnerability detection to mitigating risks before they happen
    ✅ Cross-Domain Solutions – The push toward hygienic networks and a proactive approach to security in classified environments
    They also discuss real-world security threats, including a recent open-source software attack that nearly went undetected. Plus, Scott shares his vision for AI inspired by non-human intelligence (think squids and insects! 🦑).
    🔹 If you're in software development, cybersecurity, or tech leadership, this episode is a must-listen!
    📲 Connect with Scott and Mike on LinkedIn for more insights.
    🎧 Listen now and stay ahead in the world of secure software and cyber defense!

  • 💡 Is blockchain just for crypto bros, or is it the future of financial security?
    In this episode, Carolyn Ford welcomes Don Spies, CEO of Outrider Analytics, to unravel the complexities of blockchain intelligence. They discuss how digital assets are transforming crime fighting and how law enforcement agencies use blockchain analysis to track illicit activity—from fraud and ransomware to cyber warfare and terrorist financing.
    Pop culture meets cybersecurity as Don speculates how blockchain could change classic crime stories like Ocean’s 11 and shares his vision for a blockchain-themed thriller (Encrypted Justice, anyone?).
    Key Topics Covered:
    ✔️ Blockchain 101 – Explained like you’re in eighth grade
    ✔️ How criminals leverage digital assets & how investigators track them
    ✔️ Real-world cases of financial crime solved through blockchain
    ✔️ Common misconceptions about cryptocurrency & security risks
    ✔️ The role of government regulation in shaping the future of digital assets
    Listen now to discover how blockchain is reshaping global security!
    🔗 Learn more about Don Spies and Outrider Analytics (https://www.linkedin.com/in/don-spies/)

  • Explore the transformative potential of AI in this compelling conversation with retired Lieutenant General Michael Groen, former director of the Joint Artificial Intelligence Center (JAIC). General Groen discusses how AI is reshaping industries, national security, and human potential. He delves into the challenges of fostering innovation in large organizations, the necessity of cultural shifts, and the critical role of leadership in navigating this technological revolution. Whether you’re an AI enthusiast, a tech skeptic, or just curious about the future, this episode will inspire you to imagine the possibilities.

    How do we empower warfighters with real-time, secure communication in the most challenging operational environments? In this episode of Tech Transforms, Carolyn sits down with Dan O'Donohue, VP of Secure Communications at Owl Cyber Defense and retired U.S. Marine Corps Lieutenant General, to explore the cutting edge of tactical network technologies. With firsthand experience leading Marine Forces Cyber and Joint Force Development, Dan discusses the evolution of secure data exchange, the integration of AI, and the significance of adaptability in disrupted environments.
    Discover how secure communication networks drive operational success, how cross-domain solutions enhance coalition interoperability, and why a modernized, data-driven military is essential in today’s battlespace. Whether you're a tech enthusiast or a leader seeking insights on the future of tactical networks, this episode will transform the way you think about secure communications.
    Tune in for expert perspectives and actionable takeaways from one of the foremost leaders in the field.

  • In this episode of Tech Transforms, Tracy Bannon, a software architect and researcher at MITRE, dives into the exciting potential and challenges of using generative AI in the Software Development Lifecycle (SDLC). Tracy explores how AI can revolutionize workflows, enhance testing, and even act as a collaborative team member. From addressing human trust issues with AI to building decision-making tools, Tracy shares her journey in navigating the evolving role of generative AI. Tune in for actionable insights and learn how careful integration of AI can unlock innovation while maintaining security and trust.
    Key Topics Covered:
    Integrating generative AI into the SDLC
    Trust and human factors in using AI tools
    Promising applications of AI in software development
    Security considerations and risks of AI-generated code
    Shaping the future of AI in a responsible way
    Sponsor:
    This episode is brought to you by Owl Cyber Defense, leaders in secure data transfer solutions for critical networks.

  • In this episode of Tech Transforms, host Carolyn Ford welcomes back Kris Saling, the Director of Talent Analytics and Data Strategy for the U.S. Army, to discuss her new book, Data-Driven Talent Management. Kris shares how analytics is reshaping recruitment, retention, and employee engagement across both the public sector and private industry, providing leaders with powerful tools to build stronger, more motivated teams. From the value of non-monetary incentives to the evolving role of AI in hiring, Kris reveals practical ways data can transform talent management. Whether you're in marketing, cybersecurity, or beyond, this episode is packed with actionable insights on creating a people-first workplace. Join us for a deep dive into the future of workforce management!

    Jason Miller is the Executive Editor of Federal News Network and has covered the federal technology space over the course of five Presidential administrations. He brings his wealth of knowledge as he joins Tech Transforms to talk about AI, the top things government agencies are working towards this year and his predictions around FedRAMP changes. Jason also draws on his decades of experience as he discusses events that changed the nation's approach to cybersecurity and the longstanding need to have data that is better, faster and easier to use.

    Key Topics
    00:00 AI's impact on texting and cloud's significance.
    04:17 Federal Enterprise Risk Management in government tech.
    07:20 AI trends shifting toward real-time application.
    11:22 2025 and 2027 deadlines for zero trust.
    13:31 CISOs and CIOs adapting to modern technology.
    16:45 Frustration with FedRAMP leads to reform efforts.
    21:39 Applying similar model to expand decision-making.
    23:37 GSA discussed OSCAL at private industry day.
    27:55 CISA's role has grown within DHS.
    30:33 Increased transparency in cybersecurity changed approach significantly.
    34:17 Reflecting on the 2006 significance of data.
    39:19 AFCEA events bring together good people.
    42:53 Fascination with government architecture and dedicated government workers.
    44:35 Promoting positivity and accountability in government industry.
    Cybersecurity Evolution: Examining Technology's Political Neutrality and AI Commitment Through Administrative Changes

    Consistent Focus on Cybersecurity Evolution Across Political Administrations

    Jason expressed a clear conviction that technology issues are largely immune to political fluctuation and represent a point of continuity in government agendas. Reflecting on his experience across five administrations, he noted that foundational technology discussions, such as cloud adoption, cybersecurity enhancement and overall IT improvement, are fundamentally preserved through transitions in political leadership. He highlighted that the drive to enhance government IT is typically powered by the resilience and dedication of public servants, who generally carry on valuable reforms and initiatives regardless of the sitting administration's politics. These individuals are essential to sustaining progress and ensuring that technology remains a key priority for effective governance.

    Federal IT Policies Consistency: "No one comes in and says, I'm against AI, or cloud is bad, move back on premise, or cybersecurity, defund cybersecurity. I think those are the issues that stay the same." — Jason Miller

    Executive Orders and AI Adoption

    Addressing the specifics of executive orders, particularly those influencing the implementation and development of artificial intelligence (AI), Jason examined their historical persistence and their potential to shape operational practices in the government sector. He and Mark discussed how the stability of AI-related orders through various administrations is indicative of a broader governmental consensus on the integral role AI holds in modernizing federal operations. Despite changes in leadership, incoming officials frequently uphold the momentum established by their predecessors when it comes to leveraging AI, indicating a shared, bipartisan recognition of its strategic importance to the government's future capabilities and efficiencies.

    Cybersecurity Evolution: Zero Trust Principles and Network Security Challenges in Federal Agencies

    Zero Trust and Cybersecurity Budgeting

    During the podcast, Carolyn and Jason delve into the current trends and expectations for federal cybersecurity advancements, with a particular focus on zero trust architecture. They acknowledge that agencies are on a tight schedule to meet the guidelines set forth by the Office of Management and Budget, which has highlighted 2025 as the target year for civilian agencies to embrace specific zero trust requirements, while the Department of Defense has until 2027.

    Moving past the traditional perimeter defense model, zero trust principles necessitate an ongoing and multifaceted approach to security, which includes sizable budget implications. Jason underscored the importance of the 2024 fiscal year, noting it as the first time federal budgets are being crafted with clear delineations for zero trust capabilities. This shift in focus is exemplified by the rollout of endpoint detection and response (EDR) technologies, vital components of this architecture that ensure rigorous monitoring and real-time responsiveness to cyber threats.

    Understanding the Cybersecurity Evolution

    Jason underscored the complexities of network security as federal entities confront the expanding cybersecurity landscape. He highlighted the layered approach needed to fortify cybersecurity, starting with identity and access management (IAM). This segment illuminated the government's drive to update antiquated systems with modern identification and credentialing processes to better regulate access control. The discussion spilled into a critical analysis of data layer security, emphasizing the necessity for agencies to marshal their applications and data against unauthorized access. Furthermore, Jason hinted at the broader horizon of security measures, which now includes operational technology (OT) and Internet of Things (IoT) devices. The intertwining of these technologies with standard IT infrastructure adds layers of complexity for security protocols. The conversation shined a light on the massive task that lies ahead as agencies work to comprehend and safeguard the expanded network perimeters and develop strategies to encapsulate a variety of devices under a comprehensive cybersecurity shell.

    The Evolution of AI in Cybersecurity: "We can take data that was 3 years ago or data over the last 3 years and look for trends that we can then use for our future. I think what they're looking for now is more real time, more immediate, especially if you think about, like, cybersecurity." — Jason Miller

    Innovations and Challenges in Tech Reporting

    Timeliness in Problem Reporting

    Jason believes that being proactive is vital when it comes to identifying and addressing potential issues within federal agencies. He highlighted that by the time an oversight report, such as those from the Government Accountability Office or an Inspector General's office, is made public, the concerned agency has likely been aware of the issue and has already taken steps to address it. This underlines the criticality of immediate agency reactions to problems. In the context of these reports, Jason suggested reading the agency's responses first. They provide the most current view of what's happening and the actions taken, often making them more newsworthy than the findings of the report itself.

    ACT-IAC and AFCEA Gatherings Key to Cybersecurity Evolution Dialogue

    Without specifically endorsing any one event, Jason acknowledged the importance of various industry gatherings where government and industry leaders convene to discuss pressing topics. He emphasized the ACT-IAC and the AFCEA events as beneficial arenas that enable him to engage deeply in conversations that can lead to actionable insights and meaningful connections. He also mentioned that these events provide an opportunity to interact with federal agency leaders outside the formal constraints of an office setting. This can lead to more open and candid exchanges of ideas and experiences within the government tech community. The ACT-IAC conferences and AFCEA's branch-specific IT days, according to Jason, yield particularly high-value discussions that contribute to both immediate news items and broader thematic reporting.

    Probing the Cybersecurity Evolution

    Jason's Insight on Federal Tech Trends

    Jason brings a wealth of knowledge specific to federal government technology trends. He highlights AI as a prevalent topic within current discussions. His emphasis on AI signifies the shift from its former buzzword status to a fundamental tool in federal IT arsenals, especially regarding applications in cybersecurity and immediate data analysis. Jason notes that this mirrors the pattern of past tech trends in the industry, where initial hype evolves into concrete implementations. The conversation underscores the fact that while AI is gaining traction in strategic planning and operations, it is critical to discern genuine AI adoption from mere marketing.

    AI Shift Reflects Cybersecurity Evolution and Predictive Technology Integration in Government Operations

    As the conversation progresses, Jason, Carolyn and Mark explore how the vigorous enthusiasm around AI aligns with patterns observed during the advent of previous technologies. The cycle of tech trends typically begins with a surge of excitement and culminates with the practical integration of technology within government operations. Jason points out that although AI is the topic du jour, the government's drive towards embracing real-time and predictive capabilities of AI is indicative of its elevated role compared to earlier technology hypes. This shift spotlights AI's increasing value in enhancing operational efficiency and decision-making processes across various federal agencies.

    Appreciating Government Employees: “There's so many great people who work for the government who want to do the right thing or trying to do the right thing, that work hard every day, that don't just show up at 9 and leave at 5 and take a 2 hour lunch." — Jason Miller

    The FedRAMP Overhaul Debate

    Rethinking FedRAMP

    FedRAMP's reform was a critical topic addressed by Jason, who noted industry-wide eagerness for revising the program's long-standing framework. Not only has the cost of compliance become a pressing issue for businesses aiming to secure their cloud solutions, but the time-consuming journey through the certification labyrinth has compounded their challenges. Advancements in technology and a shift towards better automation capabilities have supported the argument for modernizing FedRAMP. The white paper presented by the General Services Administration responded to such pressures with the goal of making the process more efficient. Jason also mentioned a legislative angle with Representative Connolly's involvement, a sign that Congress is attuned to the private sector's concerns about the program's current state.

    Predicting the Future of FedRAMP

    Moving forward, while discussing federal efforts to enhance cloud security protocols, Jason described the nuances in predicting FedRAMP's evolution. He cited the Department of Defense's actions as a positive development, in which they suggested frameworks for accepting FedRAMP certifications reciprocally, depending on security levels. This reciprocity aims to foster mutual trust and reduce redundancy in security validations. However, Jason exercised caution in providing a timeline by which tangible reforms might materialize for businesses pursuing FedRAMP accreditations. Despite the uncertainties, he recognized automation, specifically via OSCAL, as a potential accelerant for the much-needed reform, bringing about quicker, more cost-effective compliance processes.

    Tracking the Cybersecurity Evolution: From 2006 Data Breach to Contemporary Data Protection Strategies

    Analyzing the Cybersecurity Evolution Post-2006 Veterans Affairs Data Mishandling

    Jason provided context on the evolution of cybersecurity, drawing from a 2006 incident in which the Department of Veterans Affairs mishandled tapes containing sensitive data on millions of veterans. This episode, he explained, was an eye-opener, underscoring the importance of data security within the federal government. The aftermath was a pivot towards greater openness about cybersecurity issues, moving away from a more secretive posture to one where sharing information became essential for strengthening overall security. What we observe now is a more concerted effort within government circles to collaborate, engage with industry partners, and cultivate a proactive stance on cybersecurity threats, with agencies actively communicating about and learning from security incidents.

    Emphasizing Data Protection

    The conversation highlighted the criticality of data protection as it has become the nucleus of many governmental operations and decision-making processes. Since the intrusion into the Office of Personnel Management's records, there has been a palpable shift toward more robust data safeguards. Jason pointed out that staying well-informed about such dynamics is crucial, entailing immersion in activities such as attending industry events, networking with key players and thoroughly analyzing inspector general and Government Accountability Office reports. Such proactive engagement helps in staying abreast of the current and emerging landscape of federal technology, especially the methodologies and strategies deployed to protect the troves of sensitive data managed by government entities.

    About Our Guest

    Jason Miller has served as executive editor of Federal News Network since 2008. In this role, he directs the news coverage on all federal issues. He has also produced several news series – among them on whistleblower retaliation at the Small Business Administration, the impact of the Technology Modernization Fund and the ever-changing role of agency CIOs.

    Episode Links
    FedRAMP Memo
    ACT-IAC Event
    AFCEA Events

  • Can you spot a deepfake? Will AI impact the election? What can we do individually to improve election security? Hillary Coover, one of the hosts of the It’s 5:05! Podcast, and Tracy Bannon join for another So What? episode of Tech Transforms to talk about all things election security. Listen in as the trio discusses cybersecurity stress tests, social engineering, combatting disinformation and much more.

    Key Topics
    04:21 Preconceived notions make it harder to fake.
    06:25 AI exacerbates spread of misinformation in elections.
    11:01 Be cautious and verify information from sources.
    14:35 Receiving suspicious text messages on multiple phones.
    18:14 Simulation exercises help plan for potential scenarios.
    19:39 Various types of tests and simulations explained.
    23:21 Deliberate disinformation aims to falsify; consider motivation.
    27:44 India election, deepfakes, many parties, discerning reality.
    32:04 Seeking out info, voting in person important.
    34:18 Honest cybersecurity news from trusted source.
    38:33 Addressing bias in AI models, historic nuance overlooked.
    39:24 Consider understanding biased election information from generative AI.
    Navigating the Disinformation Quagmire

    Dissecting Misinformation and Disinformation

    Hillary Coover brings attention to the pivotal distinction between misinformation and disinformation. Misinformation is the spread of false information without ill intent, often stemming from misunderstandings or mistakes. On the other hand, disinformation is a more insidious tactic involving the intentional fabrication and propagation of false information, aimed at deceiving the public. Hillary emphasizes that recognizing these differences is vital in order to effectively identify and combat these issues. She also warns about the role of external national entities that try to amplify societal divisions by manipulating online conversations to serve their own geopolitical aims.

    Understanding Disinformation and Misinformation: "Disinformation is a deliberate attempt to falsify information, whereas misinformation is a little different." — Hillary Coover

    The Challenges of Policing Social Media Content

    The episode dives into the complexities of managing content on social media platforms, where Tracy Bannon and Hillary discuss the delicate balance required to combat harmful content without infringing on freedom of speech or accidentally suppressing valuable discourse. As part of this discussion, they mention their intention to revisit and discuss the book "The Ministry for the Future," which explores related themes, suggesting that the novel offers insights into the intricate challenges of regulating social media. There is a shared concern about the potential for an overly robust censorship approach to hinder the dissemination of truth as much as it limits the spread of falsehoods.

    The Erosion of Face-to-Face Political Dialogue

    The conversation transitions to the broader societal implications of digital dependency, specifically how the diminishment of community engagement has led individuals to increasingly source news and discourse from digital platforms. This shift towards isolationistic tendencies, amplified by the creation of digital echo chambers, results in a decline of in-person political discussions. As a result, there is growing apprehension about the future of political discourse and community bonds, with Hillary and Tracy reflecting on the contemporary rarity of the open, face-to-face political conversations that generations past traditionally engaged in.

    The Shadow of Foreign Influence and Election Integrity

    Challenges in India’s Multiparty Electoral System

    In the course of the discussion, the complexity of India's electoral system, with its multitude of political parties, is presented as an example that underlines the difficulty in verifying information. The expansive and diversified political landscape poses a formidable challenge in maintaining the sanctity of the electoral process. The capability of AI to produce deepfakes further amplifies the risks associated with distinguishing genuine content from fabricated misinformation. The podcast conversation indicates that voters, particularly in less urbanized areas with lower digital literacy levels, are especially vulnerable to deceptive content. This magnifies the potential for foreign entities to successfully disseminate propaganda and influence election outcomes.

    Election Integrity and AI: "Misinformation and disinformation, they're not new. The spread of that is certainly not new in the context of elections. But the AI technology is exacerbating the problem, and we as a society are not keeping up with our adversaries and social media manipulation. Phishing and social engineering attacks enhanced by AI technologies are really, really stressing the system and stressing the election integrity." — Hillary Coover

    Countering Foreign Disinformation Campaigns in the Digital Age

    With a focus on the discreet yet potent role of foreign intervention in shaping narratives, Hillary spotlights an insidious aspect of contemporary political warfare: the exploitation of media and digital platforms to sway public perception. This influence is not limited to overt propaganda but extends to subtler forms of manipulation that seed doubt and discord among the electorate. As the podcast discussion suggests, the consequences of such foreign-backed campaigns could be significant, leading to polarization and undermining the foundational principles of democratic debate and decision-making. The potential for these campaigns to distort political discourse warrants vigilance and proactive measures to defend against such incursions into informational autonomy.

    Addressing the Impact of Disinformation Through AI's Historical Representation Bias

    Tackling Disinformation: AI Bias and the Misrepresentation of Historical Figures

    The discussion on AI bias steers toward concrete instances where AI struggles, as Tracy brings forth examples that illustrate the inaccuracies that can arise when AI models generate historical figures. Tracy references a recent incident in which Google's Gemini model was taken offline after it incorrectly generated images of German soldiers from World War II that did not match historical records. Similar errors occurred when the AI produced images of America's Founding Fathers featuring individuals whose racial backgrounds did not reflect the true historical figures. These errors are attributed not to malicious intent by data scientists but to the data corpus used in training these models. This segment underscores the significant issues that can result when AI systems misinterpret or fail to account for historical contexts.

    The Necessity of Addressing AI Bias

    Continuing the conversation, Hillary emphasizes the importance of recognizing and addressing the biases in AI. She advocates for the vital need to understand historical nuances to circumvent such AI missteps. Both Hillary and Tracy discuss how biased news and misinformation can influence public opinion and election outcomes, bringing to light the critical role historical accuracy plays in the dissemination of information. They point out that preventing biased AI-generated data from misleading the public requires a combination of historical education and conscious efforts to identify and address these biases. The recognition of potential AI bias leads to a deeper discussion about ensuring information accuracy, particularly with regard to historical facts that could sway voter perception during elections. Tracy and Hillary suggest that addressing these challenges is not just a technological issue but also an educational one, where society must be taught to critically evaluate AI-generated content.

    The Challenge of Community Scale Versus Online Influence

    Combating Disinformation: The Struggle to Scale Community Engagement Versus Digital Platforms' Reach

    The dialogue acknowledges the difficulty of scaling community engagement in the shadow of digital platforms' expansive reach. Hillary and Tracy delve into the traditional benefits of personal interactions within local communities, which often contribute to a more nuanced and direct exchange of ideas. They compare this to the convenience and immediacy of online platforms, which, while enabling widespread dissemination of information, often lack the personal connection and accountability that face-to-face interactions foster. The challenge underscored is how to preserve the essence of community in an age where online presence has become overpowering and sometimes distancing.

    Navigating the Truth in the Digital Age: “Don't get your news from social media. And then another way, like, I just do a gut check for myself. [...] I need to go validate." — Hillary Coover

    Impact of Misinformation and Deepfakes on Political Discourse

    The episode reiterates the disquieting ease with which political discourse can be manipulated through deepfakes and misinformation. Showcasing the capabilities of AI, Tracy recalls a deepfake scam involving fake professional meetings that led to financial fraud. These examples underscore the potential for significant damage when such technology is applied maliciously. Hillary emphasizes the critical need to approach online information with a keen eye, pondering the origins and credibility of what is presented. Both Tracy and Hillary stress the importance of developing a defensive posture towards unsolicited information, as the blurring line between authentic and engineered content could have severe repercussions for individual decisions and broader societal issues.

    Stress Testing and Mitigating Disinformation in Election Security Strategies

    The Role of Stress Tests in Election Security

    Hillary and Tracy discuss the importance of conducting stress tests to preemptively identify and mitigate vulnerabilities within election systems. These tests, which include red teaming exercises and white hat hacking, are designed to replicate real-world attacks and assess the systems' responses under duress. By simulating different attack vectors, election officials can understand how their infrastructure holds up against various cybersecurity threats and use that information to make improvements that enhance security. The goal of these stress tests is to identify weaknesses before they can be exploited by malicious actors, thereby ensuring the integrity of the electoral process.

    Mitigating the Impact of Disinformation

    The conversation emphasizes the urgent need for preemptive measures against disinformation, which has grown more sophisticated with the advent of AI and deepfakes. As these technological advancements make discerning the truth increasingly difficult, it becomes even more crucial for election officials to prepare for the inevitable attempts at spreading falsehoods. Through stress tests that incorporate potential disinformation campaigns, officials can evaluate their preparedness and response strategies, including public communication plans to counteract misinformation. By considering the psychological and social aspects of election interference, they aim to bolster defenses and ensure voters receive accurate information.

    Election Security Concerns: "Other instances are going to happen where criminals are gonna be impersonating legitimate sources to try to suppress voters in that case, or steal credentials, spread malware." — Hillary Coover

    Importance of Proactive Approaches to Election Safeguarding

    The exchange between Tracy and Hillary reveals a clear consensus on the necessity of proactive strategies for protecting elections. Proactively identifying potential threats and securing electoral systems against known and hypothetical cyber attacks are central to defending democratic processes. By focusing on anticipation and mitigation, rather than simply responding to incidents after the fact, authorities can improve election security and reinforce public trust. This proactive stance is also crucial in dealing with the spread of disinformation, which may be specifically tailored to exploit localized vulnerabilities in the electoral infrastructure.

    Reflecting on the Challenges of Election Security in the Digital Era

    This episode serves as a thorough examination of the challenges posed by digital communication in modern democracies. Tracy and Hillary delve into the dangers of misinformation and the manipulation of public opinion, highlighting how biases in AI can affect the information individuals receive. They underscore the importance of stress-testing election systems against digital threats and recognize the complexities inherent to securing contemporary elections. The episode ultimately helps listeners better grasp the ever-evolving landscape of election security and the continued need for informed, strategic action to safeguard democratic processes.

    About Our Guest

    Hillary Coover is one of the hosts of It’s 5:05! Podcast, covering news from Washington, D.C. Hillary is a national security technology expert and accomplished sales leader currently leading product strategy at G2 Ops, Inc.

    Episode Links
    Billy Joel - Turn the Lights Back On
    Deepfakes and AI: How a 200 Million Scam Highlights the Importance of Cybersecurity Vigilance
    The Ministry for the Future: A Novel
    It’s 5:05! Podcast

    Deborah Stephens, the Deputy Chief Information Officer for the United States Patent and Trademark Office (USPTO), “grew up,” so to speak, in the USPTO. Deborah led the USPTO on its agile journey. As the agency took on its “New Ways of Working” by moving people and resources closer to the work, she helped empower employees to build and deploy software. Deborah shares how she guided the agency through this 4-year change journey and gained buy-in from the organization, reflected in an engagement rate increase from 75% to 85%. Deborah also talks about what it means to be a HISP, running USPTO as a business that is entirely self-sustaining, and, in honor of Women’s History Month, the women who have inspired her along the way.

    Key Topics
    05:54 Some embraced digital change, others struggled with it
    08:53 Most employees were ready for telework
    10:59 USPTO shifts to agile approach for IT
    16:41 Gathering feedback led to 10% engagement increase
    23:50 Customers submit 600,000+ patent and trademark applications yearly
    26:51 Agency conducts outreach through webinars and trademarks
    31:06 Customer experience and UX processes are fundamental
    33:45 USPTO offers different fee structures for entities
    35:30 USPTO runs efficiently with prioritization and budgeting
    39:43 Acknowledging strong women, personally and professionally
    43:21 Seek guidance and practice for success
    Growth in Patent and Trademark Requests

    Surge in Applications at USPTO

    Deborah Stephens highlights a significant increase in the number of patent and trademark applications received by the USPTO over the years. This growth, from approximately 350,000 to 400,000 applications in 2012, with numbers continuing to rise, underscores the vibrant culture of innovation and creativity in the United States. The upward trend of applications is a positive sign of the country's ongoing commitment to innovation. However, it also presents logistical challenges for the USPTO, including the need to process a higher volume of applications efficiently while ensuring the quality of examination does not diminish.

    Transition to New Ways of Working in U.S. Patent and Trademark Office: "And so in around late 2018, 19, we began our, what we referred to as our agile journey. We named it our New Ways of Working, which essentially is an entire USPTO effort. Including our business unit with 12 other business units, moving people and the resources closer to the work. Giving them that empowerment, to build, deliver, deploy software, product services for our business stakeholders, and that's both internally and externally." — Deborah Stephens

    USPTO is Adapting to Increased Demand

    In response to the growing demand for intellectual property protection, the USPTO has been proactive in seeking ways to maintain and improve service delivery. Deborah discusses the agency's approach to managing the influx of applications, focusing on scalability and efficiency. Despite the challenges posed by the increase in applications, the USPTO's designation as a High Impact Service Provider (HISP) has had minimal impact on its existing customer experience strategy. The agency's foundational commitment to delivering exceptional service to inventors and entrepreneurs remains steadfast, with an emphasis on continuous improvement and the adoption of new strategies to better meet the needs of the U.S. innovation community.

    USPTO's Fee-Funded Model and Fiscal Strategy

    USPTO’s Fee-Funded Operations

    Deborah highlights the United States Patent and Trademark Office's (USPTO) operational model, which is uniquely self-sufficient, relying entirely on fees collected from patent and trademark applications. This model ensures that the USPTO does not use taxpayer dollars, setting it apart from many other government agencies. By directly linking the agency's funding to the services it provides, the USPTO aligns its goals closely with the needs and successes of its primary users: inventors and businesses seeking intellectual property protection. This connection incentivizes the agency to continuously improve its processes and customer service. Additionally, Deborah mentions a tiered fee system that offers different rates for entities of various sizes, from individual inventors to large corporations. This structure is designed to lower barriers for smaller entities and encourage a wider range of innovation.

    USPTO’s Budgetary Discipline and Management

    As the USPTO faces economic pressures such as inflation, its approach to budget management becomes even more pivotal. Deborah discusses the importance of prioritization and strategic decision-making in maintaining the agency's financial health. Despite rising costs, the USPTO strives to keep its budget stable and even reduce it when possible, demonstrating a high level of fiscal responsibility. This is achieved through careful analysis of projects and initiatives, focusing resources on areas that promise the highest impact. The USPTO's disciplined budgetary approach not only ensures its operations are sustainable but also serves as a potential model for other federal agencies. By showcasing how to effectively manage finances in a challenging economic environment, the USPTO underlines the value of strategic planning and prioritization in government fiscal strategy.

    Telework Readiness and Agile Transformation at USPTO

    USPTO’s Transition to Telework Prior to COVID-19

    Deborah highlights the USPTO's preparedness for telework well before the COVID-19 pandemic. With a significant portion of the workforce already equipped and familiar with remote working protocols, the USPTO had laid a robust foundation for telework readiness. This foresight in establishing a telework culture not only ensured the continuity of operations during unprecedented times but also underscored the agency's commitment to leveraging modern work practices. The transition to a fully remote working environment, necessitated by the pandemic, was thus more seamless for the USPTO than for many other organizations, demonstrating a proactive approach to business continuity planning.

    Introducing Change in Remote Work Environments: "There were every 2 weeks of what we refer to as, lunch and learns. And in the beginning, I was the prime speaker, saying, here's our New Ways of Working. Here's the structure. Here's how we're gonna move our processes, our procedures, and people would join in. And it was all remote. I'd have a big TV like producer kind of studio, and I'd be in front of the blue screen and talking to them about this change at least every 2 weeks, if not, sometimes more." — Deborah Stephens

    Agile Transformation and Cultural Shift at USPTO

    The shift from traditional waterfall methods to agile methodologies marked a significant transformation within the USPTO. Deborah emphasizes that this transition was not merely about changing project management techniques; it involved a deeper cultural shift within the organization. Achieving buy-in from both individuals and teams was crucial to fostering an environment that embraced agility, empowered employees and encouraged rapid deployment of products. Key to this cultural transformation were regular remote meetings and employee engagement surveys, which played a significant role in understanding and enhancing employee satisfaction. The notable increase in engagement from 75% to 85% during this period of change illustrates the effectiveness of the USPTO's approach in not only implementing agile methodologies but also cultivating a culture that is receptive and adaptive to change.

    Tech Landscape and Patent Filing Insights at USPTO

    USPTO’s "Fail Fast, Fail Forward" Approach

    Deborah shares the USPTO's dynamic approach to technological innovation, encapsulated in the mantra "fail fast, fail forward." This methodology allows the USPTO to quickly test new ideas and technologies, learn from any setbacks and refine its strategies efficiently. By fostering an environment where experimentation is encouraged and failure is seen as a stepping stone to success, the agency ensures that it remains at the forefront of technological advancements. This approach is crucial in a rapidly changing tech landscape, as it enables the USPTO to adapt and innovate continuously. Deborah highlights how this philosophy has led to a more agile and responsive IT infrastructure within the agency, one capable of meeting the demands of modern patent and trademark processing.

    The Value of Mentorship: "I think you need to establish your go-to network of mentors, and don't be afraid to become a mentor." — Deborah Stephens

    Emphasizing Customer Feedback in Patent and Trademark Submissions

    Carolyn brings attention to the importance of customer feedback in the process of patent and trademark submissions at the USPTO. Deborah explains how the agency values the insights gained from customer experiences and actively seeks out feedback to improve services. Through a variety of channels such as webinars, outreach programs and direct communication with customer service teams, the USPTO gathers valuable input from those who navigate the patent and trademark submission processes. This dedication to understanding and addressing the needs and challenges of its customers has led to significant enhancements in the USPTO's support structures. Deborah further discusses educational efforts aimed at demystifying the complexities of the patent filing process, thereby making it more accessible and navigable for inventors and businesses alike.

    Digital Transformation at USPTO

    USPTO’s Move from Paper-Based to Digital Systems

    Deborah played a significant role in transitioning the agency from a paper-based application system to a fully digitized process. This monumental task involved not just scanning existing paper documents but also integrating optical character recognition (OCR) technology to make historical patents searchable and accessible in digital form. Despite the sheer scale and logistical challenges of digitizing vast amounts of data, the initiative marked a pivotal moment in the agency's history. The transformation was not without its hurdles; initial resistance to change was a significant barrier that needed careful navigation. However, through strategic planning and a commitment to modernization, the USPTO successfully overcame these challenges, leading to a more efficient, accessible and streamlined patent application process.

    Efficient Budget Management at the USPTO: "Being able to maintain our budget or even maybe decrease the overall budget by 1%, but yet inflation going up 8, 9%, we've been able to do that. And it's about prioritization, and that's part of our New Ways of Working." — Deborah Stephens

    About Our Guest

    Deborah Stephens is the Deputy Chief Information Officer (DCIO) for the United States Patent and Trademark Office (USPTO). She has served at the USPTO for more than 30 years in multiple leadership roles, during which she has worked to improve the automated tools and informational resources that facilitate electronic processing of patent applications. In her current role, Deborah is the principal advisor to the Chief Information Officer (CIO) and responsible for managing day-to-day operations of the Office of the Chief Information Officer (OCIO) with significant oversight on information technology (IT) stabilization and modernization efforts. She guides teams towards continual improvements in IT delivery for maximum value to all stakeholders.

    Episode Links
    High Impact Service Providers (HISPs)
    USPTO Fee Schedule
    Women’s History Month Blog

    As technology rapidly evolves, we as a nation need to anticipate the attacks that may come about as a result of that innovation. Travis Rosiek, the Public Sector CTO at Rubrik and former Leader at the Defense Information Systems Agency (DISA), joins Tech Transforms to talk about how the government's approach to technology and relationship with industry has evolved over the last twenty years. He also discusses compliance, including FedRAMP compliance, managing the vast amount of data that is generated daily across the government and industry, and the importance of the U.S. Government building cyber resilient systems. Catch all this and more on this episode of Tech Transforms.

    Key Topics
    00:00 Government fielded and tested tech capabilities, explained compliance.
    05:23 Enhanced security collaboration, compliance, and risk minimization.
    09:14 Experience in government and commercial capabilities. Innovation.
    10:12 Commercial companies prioritize profitability over long-term planning.
    14:38 Challenges in public sector recruiting and retention.
    18:49 Outsourcing SaaS applications frees up resources. AI evolving, human input remains essential.
    22:33 Assessing incident response: Operational evaluation, not just compliance.
    25:57 Vendors and program office face process challenges.
    29:46 Secure cloud data access: visibility, risks, controls.
    32:27 Emphasizing need for security in IT systems.
    36:44 CISOs face challenges in evolving tech landscape.
    38:11 Support CISOs, recruit and retain talent, accountability.
    Evolving Cybersecurity Practices: A Shift to 'Cloud Smart' Strategies

    Travis's Perspective on Cloud Misconceptions

    Travis discusses the early days of cloud adoption, which were often fueled by misconceptions about its benefits. The migration toward cloud computing was commonly believed to be a cost-effective solution that would reduce expenses and simultaneously enhance security. However, he points out that this was not always the case. Many organizations have since realized that the initial cost of moving to the cloud can vary greatly based on specific use cases and applications. This realization has led to a strategic shift toward what Travis refers to as a "cloud smart" approach, highlighting the need for a more discerning and tailored evaluation of how cloud resources are utilized.

    The Role of Commercial Companies vs. Government in Problem-Solving: "Industry is great about solving problems. You know, driving that capitalism type of culture, building capabilities, selling solutions. And they're quicker to implement, adapt and deploy capabilities where the government is very slow in implementation of these you know, they can figure out the problem." — Travis Rosiek

    The 'Cloud Smart' Strategic Approach

    Taking a "cloud smart" approach indicates a maturation in the perception of cloud services by government agencies and businesses alike. Rather than a blanket strategy of cloud-first, Travis indicates that there is now a more nuanced consideration of when and how to use cloud services. He underscores the importance of aligning cloud adoption with an organization's unique needs. Including the potential scalability, security and cost implications. This approach suggests a collaborative and informed decision-making process. Recognizing that the cloud offers a variety of solutions, each with different features, advantages and trade-offs that must be carefully weighed against organizational goals and objectives.

    Navigating Cybersecurity Practices in Cloud Migration

    The Balance of Technical and Non-Technical Implications in Cloud Migration

    Travis discusses the intricacies involved in organizational cloud migrations, emphasizing that these undertakings are not solely about technological transitions but also encompass a variety of non-technical considerations. The shift to cloud-based services goes beyond mere data storage and infrastructure changes; it affects strategic business decisions, financial planning and operational workflows, necessitating a comprehensive evaluation of both the potential benefits and the challenges. Organizations must be acutely aware of the detailed shared responsibility models that cloud service providers outline, which delineate the security obligations of the provider versus the customer. Understanding these responsibilities helps in effectively managing the risks associated with cloud computing.

    The Importance of Human Oversight in AI: "But you still can't take the human out of the loop." — Travis Rosiek

    The Demand for Advanced Cybersecurity Practices in Multi-Cloud Environments

    Travis highlights a significant challenge in the cybersecurity landscape, which is the scarcity of skilled professionals equipped to manage and protect complex multi-cloud and hybrid environments. As organizations increasingly adopt a mix of cloud services and on-premises solutions, the demand for cybersecurity practitioners with the necessary expertise to navigate this complexity grows. However, attracting and retaining such talent is difficult due to competitive job markets and the limitations of government pay scales. This is compounded by the extensive skill set required for modern cloud environments, including not only security but also knowledge of cloud architecture, compliance and various cloud-specific technologies. Travis underscores the need for specialized personnel capable of addressing the advanced cybersecurity concerns that arise from this intricate, dynamic infrastructure.

    The Evolution of FedRAMP Compliance

    FedRAMP Compliance: A Shared Burden

    Travis sheds light on the evolution of the Federal Risk and Authorization Management Program (FedRAMP), a government-wide program that promotes the adoption of secure cloud services across the federal government by providing a standardized approach to security assessment, authorization and continuous monitoring. While it is often perceived as a costly and time-consuming barrier for vendors seeking to serve government clients, Travis emphasizes that the journey to FedRAMP authorization is not the sole responsibility of vendors. Government sponsors engaged in this process also bear a significant load. This dual burden requires commitment and collaboration from both parties to navigate the complexities involved in achieving FedRAMP compliance.

    Strategic Cybersecurity Practices to Navigate FedRAMP Compliance Challenges

    Travis goes into further detail regarding the collaborative challenges of attaining FedRAMP compliance. On the government side, a sponsor’s role in shepherding vendors through the process can be incredibly taxing due to staffing and resource constraints. Furthermore, the procedural nature of the FedRAMP framework can prove to be a linear and lengthy ordeal for all involved. Travis suggests that greater investment to ease the procedural efforts for government stakeholders could potentially improve the efficiency of the overall process, helping it to mature and ultimately relieving some of the burden for both vendors and government sponsors.

    Addressing Data Volume and Security Risks in Modern Cybersecurity Practices

    Data Categorization and Classification

    Carolyn highlights the daunting challenge of classifying the vast amounts of data that individuals and organizations are responsible for. Travis acknowledges this burden, especially given the exponential growth of data in today's digital landscape. He underscores that as data multiplies rapidly and spreads across various platforms – from cloud services to mobile devices – accurately categorizing and classifying it becomes more critical yet more difficult. Ensuring the security and proper handling of this data is paramount as mismanagement can lead to significant security breaches and compliance issues.

    Cybersecurity in the Era of Cloud and Mobile Computing: "If you can't answer some of those basic questions on visibility, you're gonna struggle protecting it." — Travis Rosiek

    Adapting Cybersecurity Practices to Combat Data Volume Surge

    Travis points to a report produced by Rubrik Zero Labs that sheds light on the continuous surge in data volume within organizations, often experiencing growth by significant percentages over short periods. This expansion amplifies the challenge of safeguarding critical information. Moreover, the need to provide accurate access control increases in complexity when data resides in a hybrid environment. This includes multiple clouds, on-premise servers, and SaaS applications. The continuous monitoring and protection of data across these diverse and dynamic environments present an ongoing challenge for data security professionals.

    Complexities in Data Access Controls

    Carolyn and Travis discuss the need for visibility in distributed data environments, as knowing what data exists, where it is stored and who has access to it is fundamental to securing it. Travis advocates for NIST Special Publication 800-160 as an additional resource that can guide organizations toward building cyber resilient systems. Its principles of anticipating, withstanding, recovering and adapting offer a strategic approach that goes beyond responding to cyber threats to also preparing for and preventing potential data breaches in complex IT and data environments.

    Strategic Alignment of Cybersecurity Practices with Governmental Objectives and Zero Trust Principles

    Aligning Cybersecurity Practices with Governmental Objectives

    When considering the acquisition of technology within government entities, Travis highlights the importance of aligning with governmental objectives. Especially when it pertains to national defense, scalability becomes a paramount factor, as the technology adopted must cater to expansive operations and adhere to rigorous standards of security and efficiency. In the military and defense sectors, technologies must not only serve unique and highly specialized purposes but also be viable on a large scale. Travis notes that achieving this balance often requires a nuanced approach that can accommodate the specific needs of government operations, while also being mindful of the rapidly evolving landscape of technology.

    Cybersecurity and Organizational Resilience: "Having a false sense of security, you know, in anything we build, overly trusting things or having a false sense of security, is probably our Achilles' heel." — Travis Rosiek

    Emphasizing Security Principles and Zero Trust

    Travis underscores the central role of security principles in the process of technology acquisition, and he places particular emphasis on the concept of zero trust, an approach to cybersecurity that operates on the assumption that breaches are inevitable and thus requires constant verification of all users within an organization's network. Travis argues that adopting a zero trust framework is crucial for government agencies to protect against a vast array of cyber threats. By following this principle, organizations can ensure that their acquisition of technology not only meets current operational demands but is also prepared to withstand the sophisticated and ever-changing tactics of adversaries in cyberspace.

    The ABCs of Technology Implementation

    The Adoption, Buying and Creating Strategy

    Travis reflects on a strategic approach he learned during his tenure at DISA, known as the ABCs, a methodology imparted by then-DISA director General Charlie Croom. This strategy prioritizes the use of existing commercial technologies, emphasizing 'adoption' as the primary step. By leveraging commercially available tech, organizations can tap into advanced capabilities and integrate them into their operations swiftly. The 'buy' component encourages the procurement of already fielded technologies or platforms, which may not be commercially created but have been proven in practical governmental applications. Lastly, 'create' is seen as a last resort, reserved for instances where the needs are so specialized or critical that a bespoke solution is warranted, often due to unique use cases or strict national security concerns.

    Strategic Balancing of Commercial Speed and Government Foresight in Cybersecurity Practices

    In discussing the rationale behind the ABCs framework, Travis reveals the nuanced balance required in government tech implementations. While commercial entities' speed in deploying novel solutions can address particular gaps, government institutions often play a crucial role in identifying and tackling long-term, complex challenges. Especially in defense, the need to build solutions from the ground up may arise when existing products fail to meet the stringent requirements of security-sensitive operations. Conversely, commercial technology's versatility is a critical asset, and leaning on it marks a shift from the government's historical tendency to develop its own technology solutions. Travis urges organizations to use this strategic framework to make informed, prudent decisions that consider both immediate needs and long-term strategic objectives.

    About Our Guest

    Travis Rosiek is a highly accomplished cybersecurity executive with more than 20 years in the industry. He has built and grown cybersecurity companies and led large cybersecurity programs within the U.S. Department of Defense (DoD). His experience spans driving innovation as a cybersecurity leader for global organizations and CISOs to serving as a corporate executive building products and services. His impact has contributed to successful IPOs (FireEye) and acquisitions (BluVector by Comcast).

    As a Cyber Leader in the U.S. DoD, he has been awarded the Annual Individual Award for Defending the DoD’s Networks. Travis currently serves as the Public Sector CTO at Rubrik helping organizations become more cyber and data resilient. Prior to Rubrik, Travis held several leadership roles including the Chief Technology and Strategy Officer at BluVector, CTO at Tychon, Federal CTO at FireEye, a Principal at Intel Security/McAfee and Leader at the Defense Information Systems Agency (DISA).

    He earned a Certificate in Executive Leadership from GWU and graduated with honors from West Virginia University, earning multiple engineering degrees. He was also one of the first ten students from across the nation to be awarded a DoD/NSA scholarship in cybersecurity. His pioneering mindset has helped him better secure both national and commercial critical infrastructure. Additionally, Travis is an invited speaker and author (blogs, journals, books), has served on the NSTAC and multiple advisory boards, and is an ICIT Fellow.

    Episode Links

    Rubrik Zero Labs
    NIST 800-53
    NIST 800-160
  • Sebastian Taphanel has spent his life on the cutting edge of technology and innovation. This week on Tech Transforms, Sebastian is sharing tales and lessons learned from his 20 years in DoD Special Ops and intelligence and 20 years implementing sound security engineering practices focused on implementing zero trust and highly resilient environments. Join Sebastian as he recounts his time in Special Forces taking his units out of the dark ages from secure fax communications to setting up an intranet, and how he continued with that innovative spirit through his 40-year career. He also shares his new passion, encouraging the industry to utilize disabled veterans to help fill both the cybersecurity and AI workforce gaps. They, after all, already have a call for the mission.

    Key Topics

    03:38 ODNI CIO responded quickly with Microsoft Azure.
    07:03 Protecting data via application container, expanding capabilities.
    11:01 Zero Trust redrawn cybersecurity model, data-centric approach.
    13:57 Developing zero trust plan for downstream organizations.
    18:50 Ensuring security while sharing information and protecting IP.
    21:35 APIs, containers enable fluid, flexible data access.
    24:20 Data protection systems allow secure sharing and storage.
    27:02 Addressing cybersecurity workforce gap and AI need.
    29:39 In 1998, new commander requests secure WAN.
    33:49 Applied for certified protection professional, highest security certification.
    36:28 Passionate about supporting disabled vets in cybersecurity.
    39:55 Mentoring government employees for cybersecurity and AI/ML.
    45:32 Using advanced generative AI solutions for copywriting.
    47:19 Update cybersecurity tools and systems for new threats.
    49:50 Respect for those dedicated to automation.

    Enhancing Secure Communication and Cloud Environments in Special Ops

    Special Ops Agility: Adapting to Remote Collaboration with Secure Cloud-Based Workspaces

    Sebastian Taphanel’s experience spans twenty years in DoD Special Ops and intelligence, followed by consulting in security engineering. The focal point of this episode is his role in advancing cybersecurity practices at the ODNI, with particular emphasis on resilient cloud-based environments.

    Sebastian describes the quick adaptation during the pandemic that led to the rollout of an ad hoc cloud-based workspace to ensure the ODNI's mission could endure despite the workforce being remote. GCC High, Microsoft's Government Community Cloud High, is revealed as the successor to that initial setup, providing a more secure platform managed strictly by U.S. persons. The approach highlighted the agility of cloud technology for remote collaboration within federal agencies.

    Cybersecurity in Intelligence Sharing: "Essentially, reciprocity is a process and also a culture of accepting each other's risks. And that's really the bottom line on all that." — Sebastian Taphanel

    Unfolding the GCC High Environment

    The intricacies of implementing Microsoft Azure and M365 (Office 365) are detailed as Sebastian underlines their pivotal use in creating an intranet with controlled document sharing and editing. These implementations include robust Mobile Device Management and a BYOD Mobile Application Management system that protects sensitive data on both government and personal devices, ensuring operational security and flexibility.

    Special Ops Communication Evolution

    Sebastian advanced from using secure faxes for interstate communication within military units to establishing a multi-state secure WAN, a significant leap in communication efficacy for special operations. He shared the potency of secure, cloud-based tools in streamlining and securing government communications, as well as their inherent adaptability to contemporary operational needs.

    Zero Trust Implementation and Reciprocity in Security Controls: "Reciprocity, in some circles, it's a dirty word. Because everybody wants to do it, but nobody really wants to be first." — Sebastian Taphanel

    The Shift to Cybersecurity Training and AI

    Special Ops to Cyber Ops: Training Disabled Veterans to Bridge the Cybersecurity Workforce Gap

    Sebastian recognizes the increasing importance of cybersecurity expertise in today's digital landscape. He points out the significant gap in the cybersecurity workforce and the untapped potential of disabled veterans who can be trained to meet this demand. This shift toward prioritizing cybersecurity skills reflects the industry's evolution as organizations increasingly rely on digital infrastructure, creating fertile ground for cyber threats. By equipping disabled veterans, who already possess a strong sense of duty and protection, with the technical skills to combat these threats, Sebastian believes we can build a robust cybersecurity force that benefits not just the veterans but the nation's overall security posture as well.

    Training Disabled Veterans for Cybersecurity and AI

    Building upon his own transition from a military career to cybersecurity, Sebastian is passionate about creating opportunities for disabled veterans in the field. His experience has shown him that these individuals, with their ingrained ethos of national service, can continue their mission through careers in cybersecurity and artificial intelligence. Sebastian advocates for collaborations with major tech companies and training providers to establish programs specifically tailored for veterans. These developmental opportunities can help translate military competencies into civilian technology roles. As AI continues to influence various industry sectors, including cybersecurity, the need for skilled professionals who can leverage AI effectively is critical. By providing appropriate training and mentorship, Sebastian sees disabled veterans playing an integral role in shaping the future of cybersecurity and AI.

    Special Ops Veteran Illuminates Zero Trust as a Data-Centric Security Model and the Strategic Role of AI in Cybersecurity

    Zero Trust as a Data-Centric Security Model

    In the evolving landscape of cybersecurity, Sebastian brings to light the concept of zero trust, a framework pivoting away from traditional perimeter security to a data-centric model. He highlights zero trust as a foundational approach that is shaping the way organizations safeguard their data: assuming no implicit trust and verifying every access request as if it originates from an untrusted network. Unlike the historical castle-and-moat defense strategy, which relied heavily on securing the perimeter of a network, this paradigm shift focuses on securing the data itself, regardless of its location. Zero trust operates on the fundamental belief that trust is a vulnerability, anchored in the principle that both internal and external threats exist on the network at all times. It necessitates continuous validation of the security posture and privileges of each user and device attempting to access resources on a network.

    Zero Trust as a Data-Centric Security Model: “Zero trust now has essentially redrawn the lines for cybersecurity professionals and IT professionals. And I will say it’s an absolutely data-centric model. Whereas in previous decades, we looked at network centric security models.” — Sebastian Taphanel

    Implementing Zero Trust in Special Ops

    Zero trust extends beyond theoretical formulations, requiring hands-on execution and strategic coherence. As Sebastian explains, the principle of reciprocity plays a vital role in security authorizations among different agencies: the security controls and standards established by one agency should be acknowledged and accepted by another, avoiding redundant security assessments and facilitating smoother inter-agency cooperation. In practice, however, applying such principles has been sporadic across organizations, often hindered by a reluctance to accept shared risks. Driving home the notion that strategic plans must be actionable, Sebastian underscores the critical need to dovetail high-level strategies with ground-level tactical measures, ensuring these security frameworks are not merely aspirational documents but translate into concrete protective actions.

    Special Ops in Cybersecurity: Harnessing AI and ML for Enhanced Defense Capabilities

    Amidst rapid technological advances, artificial intelligence (AI) and machine learning (ML) are being called upon to bolster cybersecurity operations. Sebastian champions the idea that AI and ML technologies are indispensable tools for cyber professionals who are inundated with massive volumes of data. By synthesizing information and automating responses to security incidents, these technologies augment the human workforce and fill critical gaps in capabilities. The agility of these tools enables a swift and accurate response to emerging threats and anomalies, allowing organizations to pivot and adapt to the dynamic cyber landscape. For cybersecurity operators, the incorporation of AI and ML translates to strengthened defenses, enriched sense-making capabilities and enhanced decision-making processes. In a field marked by a scarcity of skilled professionals and a deluge of sophisticated cyber threats, the deployment of intelligent systems is no longer a luxury; it is imperative for the preservation of cybersecurity infrastructures.

    Looking Ahead: Collaboration, Reciprocity and AI/ML Workforce

    AI/ML as a Cybersecurity Force Multiplier

    Sebastian highlights the untapped potential of artificial intelligence and machine learning (AI/ML) as critical tools that can amplify the capabilities within the cybersecurity realm. As Sebastian provides his insights on the importance of AI/ML, it becomes clear that these technologies will serve as force multipliers, aiding overwhelmed cybersecurity professionals dealing with vast arrays of data. The envisaged role of AI/ML is to streamline sense making processes and facilitate prompt, accurate cyber response actions to threats and vulnerabilities. Sebastian portrays a future where strategic use of AI/ML enables swift and informed decision-making, freeing cybersecurity operatives to focus on critical tasks that require their expertise.

    AI/ML as a Cybersecurity Force Multiplier: “I believe what’s going to be needed is the understanding, a training and culture that accepts AI/ML as an enabler.” — Sebastian Taphanel

    Empowering Special Ops Veterans for the Future Cybersecurity and AI/ML Workforce

    Sebastian asserts the urgency of preparing and equipping individuals for the cybersecurity and AI/ML workforce. He envisions an actionable plan to invigorate the employment landscape, creating a resilient front in the fight against cyber threats. Sebastian calls for a strategic focus on training and knowledge dissemination, particularly for disabled veterans, to place them in positions where they can continue serving the nation's interests in the digital domain. Recognizing the fast-evolving nature of these fields, he stresses the need for a workforce that not only understands current technologies but can also adapt to emerging trends, ensuring that collective efforts in data protection and cybersecurity remain robust and responsive to an ever-changing threat landscape.

    About Our Guest

    Sebastian Taphanel blends a more than 20-year DoD Special Ops and intelligence career with more than 20 years of sound security engineering practices focused on implementing Zero Trust and highly resilient environments through the use of innovative technologies and common sense business practices.

  • The real question is, what doesn’t Dr. Amy Hamilton do? She’s currently the visiting Faculty Chair for the Department of Energy (DOE) at National Defense University and the DOE Senior Advisor for National Cybersecurity Policy and Programs, and has had previous stops in the U.S. Army Reserves, NORAD and U.S. European Command, just to name a few.

    At National Defense University, Amy draws on all of this expertise to educate the workforce on AI and finding the right balance between automation and workforce training. Amy also explores how she teaches her students that cybersecurity has to be more than a 9-5 job, the balance of security vs. convenience, and how it will take the entire country getting on board to make the implementation of cybersecurity best practices truly possible. In this episode, we also dive into the realm of operational technology and the need to look to zero trust as we allow more smart devices into our lives and government ecosystems.

    Key Topics

    00:00 Importance of training, education and AI integration.
    06:52 Cybersecurity, AI and building codes challenges.
    09:47 Nuclear facilities need caution, open labs innovative.
    11:58 Helping students understand federal government and cybertech.
    15:37 Cyber college compared to traditional university programs.
    17:18 National Defense University offers master's degree programs.
    22:06 Addressing the urgent need to combat intellectual property theft.
    24:32 Passionate plea for cybersecurity vigilance and dedication.
    26:40 Using automation to streamline cybersecurity operations and training.
    32:06 Policy person struggles to tie guidance together.
    33:02 Collaboration is needed for addressing industry issues.
    38:25 Rethink security for devices in smart tech.
    41:16 Choosing sustainability as a guiding principle.
    43:22 Overcome writing and presenting challenges for success.

    Leveraging AI and Automation for Cyber Innovation

    Emphasizing Efficiency in the Generation of Abstracts

    Dr. Amy Hamilton underlines the capability of artificial intelligence to streamline time-consuming processes, specifically the creation of abstracts. This innovation allows for a transition from mundane, repetitive tasks to pursuits that require deeper cognitive investment, elevating the nature of the workforce's endeavors. Dr. Hamilton's discussion focuses on the practical applications of this technology, and she cites an instance from the National Defense University's annual Cyber Beacon Conference, where participants were challenged to distinguish between AI-generated and human-generated abstracts and often found it difficult to tell them apart. This exercise not only highlighted AI's proficiency but also introduced the workforce to the safe and practical application of this emergent technology.

    How do we use AI in a way that goes from low-value to high-value work? If I'm not doing abstract, what other things could I be doing and spending my brain calories towards? - Dr. Amy Hamilton

    Preparing the Workforce for Cyber Innovation

    Dr. Hamilton stresses the necessity of workforce education in the context of AI and automation, aiming for a future where employees are neither intimidated by nor unfamiliar with the advancing technological landscape. She illustrates the Department of Energy's proactive role in integrating AI into its training programs, ensuring that employees are well-acquainted with both the operational and ethical dimensions of AI deployment. Acknowledging the diverse range of operations within the DOE, including nuclear and environmental management, Dr. Hamilton notes that the appropriateness of AI application varies by context, signifying the department's nuanced approach to introducing these technologies. Through education and exposure to use cases within a controlled environment, Dr. Hamilton envisions a workforce that is not only comfortable with AI but can also leverage it to enhance productivity and safety in their respective fields.

    Cyber Innovation and Collaboration in Government Environments

    Dr. Hamilton's Role at National Defense University

    Amy serves as a crucial beacon for educating Department of Defense personnel on how the government functions as a whole. With a focus on the distinct agencies and their interaction within the broader governmental ecosystem, she acts as a conduit, clarifying for her students the intricate dance of interagency collaboration. Insights into how certain branches, like the Treasury, interact during cyber events, or into the functions of varied components within an agency, serve to demystify the convoluted nature of interdepartmental cooperation. Her teaching elevates students' comprehension of the interconnected roles and responsibilities that propel our government forward.

    Environment for Cyber Innovation

    At National Defense University, a particular distinction is made between no-tolerance environments, such as nuclear facilities, where repetitiveness and extreme scrutiny are valued over experimentation, and open science labs that thrive on creativity and incessant innovation. Dr. Amy Hamilton underlines this dichotomy, establishing the need for both the rigid reliability of technology in some contexts and unabated exploration of new horizons in others. These contrasting settings ensure the Department of Energy's multifaceted missions are maneuvered through a lens of both caution and curiosity, across a breadth of projects from the highly sensitive to the openly experimental.

    Attracting Talent to Federal Government

    The College of Information and Cyberspace, where Amy engages with the bright minds of the defense community, presents an academic path tailored for mid- to senior-career professionals. With a suite of master's degrees and certificate programs, the college not only imparts education but also fosters an ecosystem ripe for nurturing the government leaders of the future. Despite the widespread perception of financial hurdles in government roles compared to the private sector, Dr. Hamilton articulates a potent alternative allure: the mission-driven nature of public service. This inherent value proposition attracts those who yearn to contribute to a greater cause beyond monetary gain, ensuring a continual influx of devotion and expertise within federal ranks.

    So I think there's a huge amount of value of what flexibility of recognizing industry experience in cybersecurity can be very, very useful. But I also think, like, how do we attract people in the federal government when we don't have that kind of financial ability to reward? And I think it's reward by mission. - Dr. Amy Hamilton

    Fostering Diversity and Cyber Innovation

    Cyber Outreach and Advocating Diversity

    Dr. Hamilton touches on the vital role of cyber outreach and advocating for diversity in the field of cybersecurity. She brings up Kennedy Taylor, who is making strides as Miss Maryland by combining her cyber expertise with her platform in beauty pageantry. She engages and educates young people, especially girls, about the significance of cybersecurity. Amy highlights the potential of such outreach efforts to challenge and change the stereotypes associated with cybersecurity professionals. By leveraging the influence of figures like Miss Maryland, there's an opportunity to inspire a diverse new generation of cybersecurity experts who can bring fresh perspectives to tackling the industry's challenges.

    The Need for Cyber Innovation

    Throughout the discussion, Dr. Amy Hamilton stresses the increased frequency and severity of cybersecurity threats that have surfaced recently. Acknowledging that traditional cybersecurity models are faltering under these new strains, she calls for innovative thinking and proactive measures. Amy notes that measures used in the past, such as security through obscurity, no longer suffice due to the complex and interconnected nature of modern technology. This new reality requires the cybersecurity sector to evolve and embrace zero-trust principles, among other modern strategies, to safeguard against the continually evolving threat landscape.

    How do we correct, just swiftly get around to being able to apply those patches and things that we need to do? And we have to get better out of it because our adversaries are. Our adversaries were taking advantage of this every single day. - Dr. Amy Hamilton

    Addressing Risk Aversion in Cybersecurity

    In discussing the inherent risk-aversion in human nature, Dr. Hamilton points out that despite this tendency, convenience often trumps caution, leading to increased vulnerabilities. She suggests that the answer is not to shy away from innovation for fear of risks, but rather utilize it to enhance the safety and functionality of technological systems. Dr. Hamilton also highlights the crucial role that industry partnerships play in this context, suggesting that collaboration between government and private sectors is essential in developing effective and robust cybersecurity defenses. By working together, these entities can find the balance between convenience and security, ensuring a safer digital environment for all users.

    Challenges in Implementing Cyber Innovation

    Importance of User Experience in Cyber Innovation

    Dr. Amy Hamilton brings attention to the crucial role that user experience plays when incorporating automation into the workforce. She contrasts the tedious and often frustrating nature of conventional cybersecurity practices, such as manually sifting through logs, with the potential ease automation can provide. Amy uses the example of e-commerce, where users intuitively navigate online shopping without any training, to illustrate her point that intuitive design is key to user acceptance of automated systems. With user-friendly automation, employees' tasks can be streamlined, allowing them to focus on the more complex and engaging aspects of their work.

    And so I think that we need to really realize that user experience is important. - Dr. Amy Hamilton

    AI and Automation in Everyday Life

    Reflecting on her experience with AI in website design, Amy describes the simplicity and efficiency of AI-assisted tools that automatically generate content based on keywords, eliminating the need for extensive technical knowledge in web development. This underscores the tangible benefits of automation for individuals without a background in coding. Moreover, Amy emphasizes the societal shift toward greater reliance on automated systems by referencing Disney World as a model of successful automation integration. The theme park's seamless integration of automated booking systems, fast passes and reservations highlights how well-designed automation can augment the customer experience and efficiency in large-scale operations.

    Partnerships in Cyber Innovation

    The dialogue shifts toward the collaborative effort required to tackle cybersecurity breaches. Dr. Hamilton cites the expansive SolarWinds incident as a key example of where AI and automation have a role to play, and she underscores the significance of industry partnerships and a unified national approach to enhancing cybersecurity. The incident illustrates that automated tools and AI are not only about convenience; they are instrumental in swiftly identifying and rectifying vulnerabilities in complex digital systems. By automating these processes, agencies can respond more effectively to cybersecurity threats, underscoring the need for automation that complements and enhances human efforts in maintaining security.

    Educational Technologies

    Amy advocates for the use of educational tools like Khan Academy, which can benefit children by offering a controlled environment for learning. She stresses the importance of early cybersecurity awareness, suggesting that exposure to best practices should align with the first use of digital devices. This early introduction to cybersecurity principles, aided by educational technologies, is vital in preparing the next generation to navigate the expanding digital frontier securely. Automation in education, therefore, serves a dual purpose, streamlining the learning process while simultaneously fostering a culture of digital safety awareness from a young age.

    Executive Orders and Collaboration for Cyber Innovation

    The Administration's Challenges in Artificial Intelligence Regulation

    Dr. Amy Hamilton discusses the executive order on artificial intelligence, acknowledging the inherent challenges of being a government pioneer in regulating groundbreaking technology. She compares the order to earlier attempts at cybersecurity regulation and the long-standing effects those have on policy today. Dr. Hamilton predicts that in hindsight, we may perceive today's orders as early steps in an evolving landscape. Given her past experience at the Office of Management and Budget in the Executive Office of the President, she understands the complexity of crafting policy that will need to adapt as technology progresses.

    Collaborative Efforts for Cybersecurity Workforce Development

    Dr. Amy Hamilton underlines the need for collaborative synergy between government and industry to foster a robust cybersecurity workforce. With growing intellectual property theft, especially from China, she stresses that safeguarding proprietary information is not just an industry burden but also a national and allied concern. Dr. Hamilton points out that partnerships with non-profit organizations play a vital role in shaping a national response to cybersecurity challenges. Such alliances are vital for maintaining cybersecurity and counteracting espionage activities that impact not only the US but also its international partners.

    Public Awareness and Cybersecurity Breaches

    Carolyn and Dr. Amy Hamilton echo a mutual frustration over the general public's lack of awareness regarding cybersecurity threats. They underscore the gravity of cybersecurity breaches and the espionage activities that target nations' security and economic well-being. Dr. Hamilton uses historical incidents to illustrate the ongoing battle against cyber threats and the need for heightened public consciousness. The discussion implies that bolstering public awareness and concern is pivotal in the collective effort to enhance national cybersecurity.

    About Our Guest

    Amy S. Hamilton, Ph.D. is the Department of Energy Senior Advisor for National Cybersecurity Policy and Programs. Additionally, she is the Visiting Faculty Chair for the Department of Energy at National Defense University. She served two years as a senior cybersecurity policy analyst at the Office of Management and Budget, Executive Office of the President. She served in the Michigan Army National Guard as a communications specialist and was commissioned as an officer in the U.S. Army Signal Corps, serving on Active Duty and later in the U.S. Army Reserves. She has worked at both the U.S. European Command and the U.S. Northern Command & North American Aerospace Defense Command (NORAD) on multiple communications and IT projects.

    She became a certified Project Management Professional through the Project Management Institute in 2007 and earned her Certified Information Security Manager certification in 2011. And she presented “The Secret to Life from a PMP” at TEDxStuttgart in September 2016. She taught Project Management Tools at Colorado Technical University and was a facilitator for the Master’s Degree Program in Project Management for Boston University. She is an award-winning public speaker and has presented in over twenty countries on overcoming adversity, reaching your dreams, cybersecurity, and project management.

    Dr. Hamilton holds a Bachelor of Science (BS) in Geography from Eastern Michigan University, a Master of Science (MS) in Urban Studies from Georgia State University, Master in Computer Science (MSc) from the University of Liverpool, Master Certificate in Project Management (PM) and Chief Information Officer (CIO) from the National Defense University, and completed the U.S. Air University, Air War College. She completed her Doctor of Philosophy (PhD) at Regent University in its Organizational Leadership Program with a dissertation on “Unexpected Virtual Leadership: The Lived Experience of U.S. Government IT and Cybersecurity Leaders transitioning from physical to virtual space for COVID-19.” Amy’s motto is: “A woman who is passionate about project management, public speaking, and shoes.”

    Episode Links

    White House Executive Order on AI
    The Cuckoo’s Egg
    M-23-22 Executive Order
  • Have you heard? Data is the new oil. JR Williamson, Senior Vice President and Chief Information Security Officer at Leidos, is here to explain where data’s value comes from, the data lifecycle and why it is essential for organizations to understand both of those things in order to protect this valuable resource. Join us as JR breaks it all down and also explores the concept he dubbed “risktasity,” which he uses to describe the elasticity of rigor based on risk. As he says, “when risk is high, rigor should be high, but when risk is low, rigor should be low.”

    Key Topics

    00:00 Migration to the cloud has increased vulnerability.
    04:50 People want decentralized work, including mobile access.
    08:14 Shift from application to democratizing access to data.
    10:53 Identify, protect, and manage sensitive corporate information.
    13:49 Data life cycle: creation, management, access, evolution.
    20:10 Computers augmenting humans, making good decisions, insights.
    23:19 The importance of data in gaining advantage.
    27:04 Adapting to AI to anticipate and prevent breaches.
    28:51 Adoption of large language models in technology.
    33:03 Identity and access management extends beyond authentication.
    36:33 Leveraging strengths, improving weaknesses in tennis strategy.

    Tracing the Cybersecurity Evolution and Data's Ascendancy

    Evolution of Cybersecurity

    JR provided a snapshot of the past, comparing cybersecurity practices from the 1990s to what we see today. With 37 years of experience, he recalled a time when IT systems were centralized and attack surfaces were significantly smaller. Contrasting this with the present, he described how the migration to cloud services has expanded the attack surface. JR noted an increase in the complexity of cyber threats driven by the widespread distribution of networks and the need for anytime-anywhere access to data. He stressed the transition from a focus on network security to a data-centric approach, where protecting data wherever it resides has become a paramount concern.

    Data Life Cycle: "So part of understanding, the data itself is the data's life cycle. How does it get created? And how does it get managed? How does it evolve? What is its life cycle cradle to grave? Who needs access to it? And when they need access to it, where do they need access to it? It's part of its evolution. Does it get transformed? And sometimes back to the risktasity model, the data may enter the content life cycle here at some level. But then over its evolution may raise, up higher." — JR Williamson

    The New Oil: Data

    In the world JR navigates, data is akin to oil: a resource that, when refined, can power decisions and create strategic advantages. He passionately elucidated the essence of data, not just as standalone bits and bytes, but as a precursor to insights that drive informed decisions. Addressing the comparison between data and oil, JR stressed that the real value emerges from what the data is transformed into: actionable insights for decision-making. Whether responding with agility in competitive marketplaces or in the context of national defense, delivering insights at unmatched speed is where significant triumphs are secured.

    Importance of Data Security

    JR Williamson on Data and "Risktasity"

    JR Williamson stresses the heightened necessity of enforcing security measures that accompany data wherever it resides. As the IT landscape has evolved, the focus has broadened from a traditional, perimeter-based security approach toward more data-centric strategies. He articulates the complexity that comes with managing and safeguarding data in a dispersed environment, where data no longer resides within the confines of a controlled network but spans a myriad of locations, endpoints and devices. This shift has rendered traditional security models somewhat obsolete, necessitating a more nuanced approach that can adapt to the dynamic nature of data.

    The Value of Data in Decision-Making: "The data in and of itself is really not that valuable. Just like oil in and of itself is not that valuable. But what that oil can be transformed into is what's really important, and that's really the concept." — JR Williamson

    Data Security Experiences

    Both Mark and Carolyn resonate with JR's insights, drawing parallels to their own experiences in cybersecurity. Mark appreciates the straightforwardness of JR's "risktasity" model, which advocates for security measures proportional to the evaluated risk. This principle challenges the one-size-fits-all approach to cybersecurity, fostering a more tailored and efficient allocation of resources. Carolyn, in turn, connects the conversation to her own history of grappling with the intricacies of data classification and control. She acknowledges the tactical significance of understanding which data warrant more stringent protection, as well as the operational adjustments required to uphold security while enabling access and utility.
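    As a rough illustration of the risktasity idea, the sketch below maps an assessed risk level to a proportionally stricter set of controls. The risk tiers and control names are hypothetical examples chosen for the sketch; they are not drawn from the episode or from any specific framework.

```python
from enum import IntEnum


class Risk(IntEnum):
    LOW = 1
    MODERATE = 2
    HIGH = 3


# Hypothetical control sets; an organization would substitute its own baseline.
CONTROLS = {
    Risk.LOW: ["single-factor authentication", "annual access review"],
    Risk.MODERATE: ["multi-factor authentication", "encryption at rest", "quarterly audit"],
    Risk.HIGH: ["multi-factor authentication", "encryption at rest and in transit",
                "continuous monitoring", "restricted access list"],
}


def rigor_for(risk: Risk) -> list[str]:
    """Risktasity in one line: the rigor applied scales with the assessed risk."""
    return CONTROLS[risk]


print(rigor_for(Risk.HIGH))
```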

    Data Governance and Security Strategies

    Understanding Data Security and Lifecycle

    JR emphasizes the importance of understanding the data's lifecycle, acknowledging that comprehensive knowledge of how data is created, managed and ultimately disposed of is a cornerstone of effective cybersecurity. This involves not only recognizing the data's trajectory but also identifying who needs access to it, under what conditions, and how it may evolve or be transformed throughout its lifecycle. By establishing such a deep understanding, JR suggests, it becomes possible to design governance systems that are not only effective in theory but also practical and integrated into the daily operations of an organization.
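    A minimal sketch of what tracking that cradle-to-grave lifecycle might look like, assuming a simple set of stages; the stage names and the example log are illustrative only and are not taken from the episode.

```python
from enum import Enum


class Stage(Enum):
    CREATED = "created"
    MANAGED = "managed"
    SHARED = "shared"
    TRANSFORMED = "transformed"
    ARCHIVED = "archived"
    DESTROYED = "destroyed"


# Hypothetical lifecycle log for a single dataset, recording each stage and
# who handled it, so access decisions can reference where the data has been.
lifecycle_log = [
    (Stage.CREATED, "finance team"),
    (Stage.SHARED, "external auditors"),
    (Stage.TRANSFORMED, "analytics pipeline"),
    (Stage.ARCHIVED, "records management"),
]

for stage, handler in lifecycle_log:
    print(f"{stage.value:>12}: handled by {handler}")
```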

    Strategy and Organizational Support

    Transitioning from a theoretical framework to practical execution, JR discusses the necessity of an effective data protection model that can operationalize the overarching strategy. To accomplish this, an organization must develop a structure that aligns with and supports the strategic objectives. JR identifies that existing structures often serve as the most significant barriers when agencies work on implementing new cybersecurity strategies. Organizations must be prepared to confront and renovate legacy systems and management frameworks. This is a challenge that became increasingly evident as organizations rapidly shifted to cloud services to accommodate remote work during the pandemic.

    Insights from Data Security and AI Impact

    Transformation of Data into Actionable Insights

    Like oil, data's true value isn't in its raw form; it lies in the conversion process that transforms it into insights for decision-making. JR reflects on the progression of data turning into information, which then evolves into knowledge, culminating in actionable insights. Just as the versatility of oil lies in its ability to be refined into various fuels and materials, the potential of data is unlocked when it is analyzed and distilled into insights that inform crucial decisions. JR emphasizes that the effectiveness of insights hinges not just on accuracy but also on understanding the context in which those insights are applied. He suggests that these refined insights amount to competitive advantages, enabling quicker and more informed decision-making in mission-critical environments.

    The Importance of Data Insight in Business: "Getting the insight in and of itself is important. But combining that insight with understanding of the problem we're trying to solve is really where the competitive advantage comes into play." — JR Williamson

    AI's Speed Impact on Cybersecurity and Defense

    JR expresses apprehension regarding artificial intelligence's acceleration and its implications for cybersecurity and defense. This unease stems from AI's capability to operate at a pace vastly superior to human capacity. Such rapid capabilities could lead to a perpetual struggle for cybersecurity professionals, who are tasked with defending against AI-driven attacks that continually outpace their responses. For organizations to not only protect themselves but also remain competitive, JR advocates for the adoption of similar AI technologies. By leveraging advanced tools, organizations can preemptively identify vulnerabilities and secure them before they are exploited by adversaries. He alludes to an emerging arms race in cybersecurity, driven by AI advancements that necessitate a proactive rather than reactive approach to digital threats.

    Shifting Mindset in Data Security and Zero Trust Architecture

    Broader Perspective on Defensive Data Security

    Carolyn and Mark, touching on the complexities of cybersecurity, speculate about a potential paradigm shift. Rather than focusing solely on prevention, they wonder if the strategy might pivot towards containment and control once threats are within the system. JR agrees that in today's vast and interconnected digital environment, absolute prevention is increasingly challenging. Though cybersecurity has traditionally been likened to reinforcing a castle's walls, JR argues that due to the dispersed nature of modern networks and cloud computing, this approach is becoming outdated. Instead, organizations need to be agile and resilient, with security measures embedded within the data and applications themselves, ensuring they can quickly detect, mitigate and recover from breaches.

    Dissecting the Concept of Zero Trust Architecture

    JR expresses discontent with the term "zero trust" due to its implication of offering no trust whatsoever, which would stifle any exchange of information. He advocates for the terms "earned trust" or "managed trust" to more aptly describe the nuanced relationship between users and the systems they interact with. Security architecture, JR illustrates, should not rely solely on verifying users' identities; it must also account for the integrity and security posture of the devices and locations being used to access the data. By meticulously understanding which data are most sensitive and their lifecycles, organizations can ensure that access controls are rigorously applied where necessary, based on the type of data, the user's context and the access environment. This nuanced approach is fundamental to constructing a robust and adaptive zero trust architecture that evolves along with the organizational ecosystem.
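    The sketch below shows one way such a per-request evaluation could be expressed, weighing identity, device posture, location and data sensitivity together. The fields, thresholds and decision strings are hypothetical; this is a toy illustration of the idea, not JR's or any vendor's actual policy engine.

```python
from dataclasses import dataclass


@dataclass
class AccessRequest:
    user_verified: bool      # identity confirmed, e.g. via MFA
    device_compliant: bool   # device posture meets policy
    location_trusted: bool   # request originates from an approved network
    data_sensitivity: int    # 1 = public ... 3 = highly sensitive


def evaluate(req: AccessRequest) -> str:
    """Trust is earned per request: every factor is checked, none is assumed."""
    if not req.user_verified:
        return "deny: identity not verified"
    if req.data_sensitivity >= 3 and not (req.device_compliant and req.location_trusted):
        return "deny: sensitive data requires a compliant device and trusted location"
    if not req.device_compliant:
        return "step-up: limit access until the device is remediated"
    return "allow"


print(evaluate(AccessRequest(True, True, False, 3)))
# -> deny: sensitive data requires a compliant device and trusted location
```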

    About Our Guest

    JR Williamson is accountable for information security strategy, business enablement, governance, risk, cybersecurity operations and classified IT at Leidos. JR is a CISSP and Six Sigma Black Belt. He serves on the Microsoft CSO Council, the Security 50, the Gartner Advisory Board, the Executive Security Action Forum Program Committee, and the DIB Sector Coordinating Council. He is also part of the WashingtonExec CISOs, the Evanta CISO Council, the National Security Agency Enduring Security Framework team, and is the Chairman of the Board of the Internet Security Alliance.

    Episode Links

    JR Williamson’s LinkedIn
    The Billington Cybersecurity Summit
    The Expanse
    Dune: Part 2
  • What will 2024 have in store for technology development and regulation? Our hosts, Carolyn Ford and Mark Senell, sat down with Roger Cressey, Partner at Mountain Wave Ventures, Ross Nodurft, Executive Director of the Alliance for Digital Innovation and Willie Hicks, Public Sector Chief Technologist for Dynatrace, to discuss their 2024 predictions. Discover what the experts think will occur next year in terms of FedRAMP, AI regulation, Zero Trust and user experience.

    Key Topics

    00:00 Revamping FedRAMP in 2024 leads to changes.
    06:40 Industry requests FedRAMP High; concerns about changes.
    08:20 Anticipating challenges but aiming for improvement.
    11:13 Pushing for reciprocity in government technology solutions.
    15:15 Ensuring human control in AI military use.
    19:06 Questioning AI use in defense and civilian sector.
    25:25 Increased investment in security and product regulation.
    27:21 Expect more AI news, less legislative involvement.
    30:30 Observability key for zero trust framework implementation.
    36:22 Prediction: Citizens will interface with AI technology.
    37:16 Focus on user experience in government systems.
    41:03 Election year brings unexpected black swan events.

    2024 Predictions for the Public Sector

    Revamping of the FedRAMP Program

    Ross predicts that in 2024, FedRAMP will be completely reauthorized based on a pending OMB memo that is expected to be finalized in late 2023. This revamp is intended to streamline and improve the FedRAMP authorization process to facilitate faster adoption of cloud-based solutions in government.

    However, Roger believes the changes could temporarily slow things down as agencies take time to understand the implications of the new FedRAMP structure on their systems and assess risks. This could require investments from industry as well to meet new requirements that emerge.

    FedRAMP 2024: "I think it's going to have a lot of agencies take a hard look at their risk and decide where they want to elevate certain high-valued assets, high-valued systems, high-valued programs, and the authorizations themselves are gonna raise in their level." — Ross Nodurft

    Shift From Moderate Baseline to Higher Baseline of Controls

    As part of the FedRAMP reauthorization, Ross expects many agencies will shift their systems from a moderate baseline to a higher baseline of security controls. With more interconnected systems and datasets, agencies will want heightened protections in place.

    Roger concurs that the increased scrutiny on risks coming out of the FedRAMP changes will lead organizations, especially those managing high-value assets, to pursue FedRAMP High authorizations more frequently.

    Increased Demand for a FedRAMP High Environment

    Given the predictions around agencies elevating their security thresholds, Willie asks Ross whether the pipeline of solutions currently pursuing FedRAMP High authorizations could face disruptions from new program requirements.

    Ross believes there will be some temporary slowdowns as changes are absorbed. However, he notes that the goals of the reauthorization are to increase flexibility and accessibility of authorizations. So over time, the new structure aims to accelerate FedRAMP High adoption.

    2024 Predictions: Navigating FedRAMP Changes While Maintaining Industry Momentum

    As Ross highlighted, the intent of the FedRAMP reauthorization is to help industry get solutions to market faster. But in the short-term, there could be some complications as vendors have to realign to new standards and processes.

    Willie notes that companies like Dynatrace have already begun working towards FedRAMP High in anticipation of rising customer demand. But sudden shifts in requirements could impact those efforts, so he hopes there will be considerations for solutions currently undergoing authorizations.

    2024 Predictions on Cybersecurity Trends

    Zero Trust Framework

    Roger discusses how zero trust architectures are progressing forward in adoption, even though the concept has lost some of its previous buzz. The zero trust memo is still in place, people are budgeting for zero trust and funding is starting to be allocated towards implementation.

    As Willie points out, every agency he works with is developing zero trust strategies and architectures. However, he notes these architectures can be extremely complex, especially when adding in cloud and containerized environments.

    2024 Predictions: Observability Critical for Security in Complex Cloud Environments

    Ross echoes Willie's point that there is an increasing movement towards cloud-based environments. This is driving changes to FedRAMP to accommodate the proliferation of SaaS applications.

    With more enterprise environments leveraging SaaS apps, complexity is being introduced. Ross predicts that to protect, understand and maintain visibility across such complex environments with many different applications, overarching observability will become a necessity.

    Impact of the Shift Towards Cloud-Based Environments and SaaS Applications

    The shift towards cloud-based environments and SaaS applications ties back to the FedRAMP changes and predictions from Ross. As agencies move to the cloud and adopt more SaaS apps, they lose visibility and observability.

    Willie predicts observability will become "connective tissue" across zero trust architectures to provide that much-needed visibility across various pillars like devices, networks and users.
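    One rough way to picture that connective tissue is as a stream of structured events tagged by zero trust pillar, which an observability platform could then correlate. The pillar names, fields and sample events below are illustrative assumptions, not a description of any specific product.

```python
import json
from datetime import datetime, timezone


def telemetry_event(pillar: str, subject: str, action: str, outcome: str) -> str:
    """Emit one structured event; aggregating events across pillars
    (identity, device, network, application, data) is what provides
    the cross-cutting visibility discussed above."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "pillar": pillar,
        "subject": subject,
        "action": action,
        "outcome": outcome,
    }
    return json.dumps(event)


print(telemetry_event("identity", "user:jdoe", "mfa_challenge", "success"))
print(telemetry_event("device", "laptop-42", "posture_check", "non_compliant"))
```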

    The Rise of User Experience in Government Systems: "I think we're gonna see more and more, of a focus on user experience because I believe with all the things we're talking about, user experience could be impacted." — Willie Hicks

    Importance of Observability for Visibility and Understanding

    Roger concurs that visibility is crucial for security because "you can't secure what you can't see." He notes that observability and understanding where data is and what apps are doing will become a prerequisite for achieving zero trust.

    The Importance of Data Visibility in Security: "Well, I think it's gonna become table stakes, if you will, when it comes to security, because you can't secure what you can't see." — Roger Cressey

    Carolyn highlights how visibility has been embedded in zero trust frameworks from the beginning. However, Willie predicts its importance will be even more prominent in 2024.

    AI and Technology Innovations

    2024 Predictions: Navigating AI Promise and Pitfalls in the Public Sector

    Roger highlighted the tremendous upside that AI-enabled customer experience solutions could provide for government agencies in improving efficiency and service delivery. However, he also noted that any negative experiences resulting from these solutions would be heavily scrutinized and amplified. This indicates there may be cautious adoption of AI in government during 2024 as agencies balance potential benefits and risks.

    The Importance of Reciprocity in Government Technology: "I just hope they have the wherewithal and the focus to push the right people in the right parts of both the Department of Defense and to the federal civilian side to think about how reciprocity impacts their availability in the marketplace technology or commercial technology solutions out there." — Ross Nodurft

    Willie predicted there would be carefully orchestrated success stories around AI implementations, supporting Roger's point. This suggests that while innovation will continue, government agencies will likely roll out AI solutions slowly and target opportunities where impact can be demonstrated.

    Increased Investment in Security and Product Innovation

    Roger predicted that defensive cyber capabilities enabled by AI will draw greater attention and interest in 2024. Willie noted that AI is also being used in more advanced cyber attacks. Together, these trends indicate there will be an increased focus on using AI responsibly to enhance security while also defending against malicious uses.

    On the commercial side, Ross predicted venture capital investment into AI will accelerate in 2024, driving constant product updates across language models and other platforms. This rapid product innovation seems likely to widen the gap with public sector adoption.

    2024 Predictions: Balancing AI Progress and Governance in the Public Sector

    While the panelists disagreed on the likelihood of major AI regulations from Congress in 2024, Willie predicted that high-profile incidents involving AI could build pressure for new laws, even if passage takes time. He and Ross suggested implementation of AI guidance for government agencies is more likely in the near term.

    The Future Impacts of AI: "I think that the developers of AI are gonna continue to set the agenda, and the deployers, in other words, all the sectors as well as industry sectors, the developers, the deployers are still gonna be playing catch up." — Roger Cressey

    Roger noted that negative experiences with AI in government would also spur calls for regulation. However, he said acting prematurely without understanding the impacts could pose challenges. Together, these perspectives indicate oversight and governance guardrails for AI will increase but could slow adoption if not balanced thoughtfully.

    2024 Predictions: AI Policy Progress and Global Technology Leadership

    Potential Dysfunction in Congress Impacting Regulatory Progress

    Roger points out the significant disagreement between the House and Senate that could prevent Congress from finding common ground on AI regulation in 2024. The divide relates to whether the focus should be on continuing innovation or implementing more safeguards and oversight. Meaningful AI legislation at a national level would require lengthy deliberation and consensus-building that likely won't occur in an election year.

    Potential Motivation for U.S. Innovation by China’s Advancements in AI

    According to Roger, China's rapid advances in AI development and utilization could light a fire under the U.S. administration and Congress to accelerate American innovation in this area. However, the U.S. policy community also wants to ensure AI progresses responsibly. Roger argues China's AI capabilities could be an impetus for shaping U.S. strategy in 2024, balancing both innovation and risk management.

    The Global Race for AI Dominance: "Where China is moving rapidly and creatively on AI development, adoption and deployment will be a jet fuel for motivating the administration and congress to do more regarding how can innovation on the U.S. side regarding AI move quicker." — Roger Cressey

    Industry Adaptation to Change

    2024 Predictions: Navigating Changes to FedRAMP and Industry Adaptation

    Ross discusses some of the challenges the industry may face in adapting to the changes outlined in the anticipated 2023 FedRAMP reauthorization memo. He notes that while the intent of the memo is to streamline and open up the authorization process to allow more applications into the pipeline faster, implementing these changes could initially cause some disruption.

    Ross predicts there may be a "learning curve" as agencies and vendors figure out how the changes impact their specific systems and day-to-day operations. This could temporarily slow things down until the new processes are fully understood. However, Ross expects that after this initial bumpy period, the changes will ultimately enable faster movement of applications through the FedRAMP process.

    The Government’s Aim to Create a Process for a Smoother Transition

    Ross highlights that the government's aim in revising the FedRAMP authorization process is to make it easier for agencies to access and leverage innovative cloud-based technologies. The memo revisions seek to create multiple pathways for obtaining authorizations, rather than just one narrow pipeline that applications must move through.

    Discussing the Future of AI: "We gotta talk about, whether it's AI governance, whether it's innovation in AI, it's AI risks, and really understanding how do we balance all 3 of those in a way while we're still moving forward." — Roger Cressey

    The hope is that these process improvements will pave the way for more small and medium cloud-based software companies to get their products authorized for use in government. This will give agencies more options and flexibility in adopting modern solutions. However, Ross cautions that in the short-term there may be some disruptions as outlined above.

    Predictions for Significant Impact in 2024

    In terms of predictions for 2024, Ross expects that the FedRAMP changes, combined with broader cloud migration efforts underway in government, will lead more agencies to request higher baseline security authorizations. Where they may have been comfortable with a FedRAMP Moderate authorization previously, Ross predicts agencies will now ask vendors for FedRAMP High in more and more cases. This will likely impact software providers who will have to adapt their systems and applications to meet the more stringent security controls.

    About Our Guests

    Ross Nodurft

    Ross Nodurft is the Executive Director of the Alliance for Digital Innovation (ADI), a coalition of technology companies focused on bringing commercial, cloud-based solutions to the public sector. ADI focuses on promoting policies that enable IT modernization, cybersecurity, smarter acquisition and workforce development. Prior to joining ADI, Ross spent several years working with industry partners on technology and cybersecurity policy and several years in government, in both the executive and legislative branches, including as Chief of the Office of Management and Budget's cyber team in the White House.

    Roger Cressey

    Roger Cressey is a Partner with Mountain Wave Ventures. He previously served as a Senior Vice President at Booz Allen Hamilton, supporting the firm’s cyber security practice in the Middle East. Prior to joining Booz Allen, he was President and Founder of Good Harbor Consulting LLC, a security and risk management consulting firm.

    Mr. Cressey’s government service included senior cyber security and counterterrorism positions in the Clinton and Bush Administrations. At the White House, he served as Chief of Staff of the President’s Critical Infrastructure Protection Board from November 2001 – September 2002. He also served as Deputy for Counterterrorism on the National Security Council staff from November 1999 to November 2001. He was responsible for the coordination and implementation of U.S. counterterrorism policy and managed the U.S. Government's response to multiple terrorism incidents, including the Millennium terror alert, the USS COLE attack, and the September 11th attacks.

    Willie Hicks

    Willie Hicks is the Public Sector Chief Technologist for Dynatrace. Willie has spent over a decade orchestrating solutions for some of the most complex network environments, from cloud to cloud-native applications and microservices. He understands how to track and make sense of systems and data that have grown beyond human ability to manage unaided. Working across engineering and product management to ensure continued growth and speed innovation, he has implemented Artificial Intelligence and automation solutions across hundreds of environments to tame and secure their data.

    Episode Links

    FedRAMP
    Alliance for Digital Innovation
    DoDIIS Worldwide
  • On this special So What? episode we go deeper into some of the top stories being covered on the It’s 5:05! podcast with It’s 5:05! contributing journalist, Tracy Bannon. How are cybersecurity stress tests battling misinformation and aiding in election security? Is AI contributing to election disinformation? How is the CIA using SpyGPT? Come along as Carolyn and Tracy go beyond the headlines to address all these questions and more.

    Key Topics

    04:20 Proactive approach needed for software voting security.
    09:12 Deepfake technology can replicate voices and videos.
    12:38 Politics focuses on presidential level, ignores others.
    15:53 Generative AI creates new content from data.
    17:19 New tool aids intelligence agencies process data.
    20:13 Bill Gates discusses future AI agents on LinkedIn.
    25:24 Navigating biases in AI towards democratic values.
    29:13 CISA promotes continuous learning and holistic approach.
    30:51 Demystifying and making security approachable for all.
    33:33 Open source, cybersecurity, diverse professional perspectives discussed.
    Importance of Cybersecurity and Responsible AI Use

    Embracing Cybersecurity Measures and Privacy Protections

    In their conversation, Carolyn and Tracy discuss why it is imperative for both individuals and organizations to embrace robust cybersecurity measures. In an era where data breaches and cyber attacks are on the rise, implementing effective security protocols is not just a matter of regulatory compliance; it is about safeguarding the privacy and personal information of users. Tracy emphasizes the continuous need for cybersecurity vigilance and education, highlighting that it is a shared responsibility. By making use of resources like the CISA cybersecurity workbook, Carolyn suggests that individuals and businesses can receive guidance on developing a more secure online presence, which is crucial in a digital ecosystem where even the smallest vulnerability can be exploited.

    Addressing Biases in AI to Align With Public Interest and Democratic Values

    Tracy expresses concerns over the biases that can be present in AI systems, which can stem from those who design them or the data they are trained on. Such biases have the potential to impact a vast array of decisions and analyses AI makes, leading to outcomes that may not align with the broad spectrum of public interest and democratic values. An important aspect of responsible AI use is ensuring that these technological systems are created and used in a way that is fair and equitable. This means actively working to identify and correct biases, ensuring transparency in AI operations and constantly checking that AI applications serve the public good without infringing on civil liberties or creating divisions within society.

    Demystifying Cybersecurity: "We need that public understanding, building this culture of security for everybody, by everybody. It becomes a shared thing, which should be something that we're teaching our children as soon as they are old enough to touch a device." — Tracy Bannon

    The Proliferation of Personal AI Use in Everyday Tasks

    The conversation shifts toward the notion of AI agents handling tasks on behalf of humans, a concept both cutting-edge and rife with potential pitfalls. Carolyn and Tracy discuss both the ease and potential risks of entrusting personal tasks to AI. On one hand, these AI agents can simplify life by managing mundane tasks, optimizing time and resources and even curating experiences based on an in-depth understanding of personal preferences. Yet, Tracy questions what the trade-off is, considering the amount of personal data that must be shared for AI to become truly "helpful." This gives rise to larger questions about the surrender of personal agency in decision-making, the erosion of privacy and the ever-present threat of such tools being exploited for nefarious purposes.

    CISA's Cybersecurity Workbook

    Enhancing Accessibility with AI Use: Summarizing Complex Documents through Generative Tools

    Tracy introduces the concept of leveraging generative AI tools such as ChatGPT to summarize lengthy documents. This approach provides a way to digest complex material quickly and efficiently. For instance, users can feed a PDF or a website link into ChatGPT and request a summary, which the tool produces by analyzing the text and presenting the key points. Tracy emphasizes this method as a step toward making dense content, like government reports or lengthy executive orders, more accessible. She then transitions to discussing CISA's cybersecurity workbook, which illustrates a movement toward disseminating important information in a format that a broader audience, not just tech experts, can understand and apply. Tracy appreciates the effort by CISA to create resources that resonate with readers at every level of technical knowledge.
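    As a rough illustration of the summarization workflow Tracy describes, the sketch below feeds the text of a web page to a chat-completion API and asks for a plain-language summary. It is a minimal sketch only: it assumes the openai Python package (v1 or later), an OPENAI_API_KEY in the environment, and an illustrative model name, prompt and URL that are not from the episode.

```python
# A minimal sketch of generative-AI document summarization, assuming the
# `openai` Python package (v1+) and an OPENAI_API_KEY in the environment.
# The model name, prompt wording and example URL are illustrative only.
import requests
from openai import OpenAI

def summarize_url(url: str, max_chars: int = 12000) -> str:
    """Fetch a page, trim it to fit a prompt, and ask the model for key points."""
    page_text = requests.get(url, timeout=30).text[:max_chars]  # real use would extract clean text or PDF content first
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder choice; any capable chat model would do
        messages=[
            {"role": "system",
             "content": "Summarize the following document in five plain-language bullet points."},
            {"role": "user", "content": page_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize_url("https://example.com/long-report"))  # hypothetical target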

    Comprehensive Guidance for Security Measures

    The comprehensive guide provided by CISA, Tracy notes, is robust in offering detailed strategies for planning and implementing cybersecurity measures. The workbook does not shy away from diving deep into the assessment of potential cyber risks, and it details leading practices that organizations can adopt. Planning for incident response is a highlighted area, acknowledging that security breaches are not a matter of if but when. The workbook thus serves as an invaluable reference for initiating proactive steps to fortify against cyber threats. This level of comprehensive guidance serves not only as a tool for implementing robust security measures but also as a learning resource that promotes widespread understanding of cybersecurity best practices.

    Government's AI Use

    Potential Introduction of Generative AI by the CIA

    Tracy and Carolyn discuss the CIA's plans to potentially introduce generative AI through a program dubbed "SpyGPT." The idea behind this integration is to enable the parsing and understanding of extensive open-source data more efficiently.

    Generative AI, similar in concept to models like ChatGPT, could revolutionize how intelligence agencies handle the vast amounts of data they collect. If implemented, this AI would be able to generate new content based on massive datasets, providing insights that could be invaluable for intelligence processing. Carolyn raises comparisons to traditional methods of intelligence gathering, noting that such technological advancements could have helped in past events had they been available. In response, Tracy emphasizes the historic struggle of intelligence agencies to rapidly sort through surveillance information, a challenge that tools like SpyGPT could mitigate.

    The Double-Edged Sword of AI Use in Predictive Analysis

    A tool like SpyGPT has the potential to rapidly identify patterns and connections within data. This could lead to quicker and more accurate intelligence assessments. Carolyn points to the use of crowdsourcing information during the Boston Marathon bombing as an example of how rapid data correlation and analysis can be critical in national security efforts. The ability to predict and possibly prevent future threats could be significantly enhanced.

    The Dangers of Internet Era Propaganda: "I can take any idea, and I can generate vast amounts of text in all kinds of tones, from all different kinds of perspectives, and I can make them pretty ideal for Internet era propaganda." — Tracy Bannon

    However, as Tracy notes, the power of such technology is a double-edged sword, raising concerns about privacy, the potential for misuse and ethical implications. The conversation raises the specter of a "Minority Report"-esque future, where predictive technology verges on the invasive. Both Tracy and Carolyn agree on the tremendous responsibilities that come with the implementation of generative AI when it intersects with privacy, civil liberties and security.

    Election Security

    The Critical Role of AI Use in Election Security Stress Testing

    Stress testing in the context of election security revolves around rigorously probing the voting system to uncover any flaws or weaknesses. This process requires collaboration between various stakeholders, including the manufacturers of voting machines, software developers and cybersecurity experts. Tracy emphasizes the crucial nature of these simulated attacks or real-world scenarios that help reveal potential points of exploitation within the system. Identifying these vulnerabilities well before an election can give officials the necessary time to address and reinforce weak spots, ensuring the reliability and resilience of the electoral process against cyber threats.

    The AI Use in Unveiling Election System Vulnerabilities

    Tracy discusses the necessity of not just identifying but also openly revealing discovered vulnerabilities within election systems as a means to foster trust among the populace. Transparency in the security measures taken and clear communication of vulnerabilities found, when managed properly, instill a higher sense of confidence in the electoral system's integrity. This approach also plays a pivotal role in countering misinformation: by proactively conveying the true state of system security and the efforts being taken to remedy issues, officials can help dismantle unfounded claims and skepticism about the election infrastructure from various sectors of society.

    Exploring the Impact of AI Use in Deepfake Technology and Artificial Persona Creation

    Capabilities of Deepfake Technology and AI-Language Models

    Recent advancements in AI and deepfake technology have brought breathtaking capabilities, primarily the power to manipulate audio and video content with astounding realism. Tracy emphasizes the profound implications of this technology, specifically pointing to models such as "Vall-E," which can simulate a person's voice from just a few seconds of audio input.

    The Rise of Deepfakes: "Imagine what's gonna happen with the deepfake. Take a right? I can take your video. I can take your voice." — Tracy Bannon

    This technology uses sophisticated algorithms to detect nuances in speech patterns, allowing it to generate new audio that sounds like the targeted individual and effectively putting words into their mouth that they never actually said. This ability extends beyond simple mimicry: it propels the potential for creating audio deepfakes that can be nearly indistinguishable from genuine recordings. Such capabilities raise significant concerns about the reliability of auditory evidence and the ease with which public opinion could be manipulated.

    Creation of Artificial Personas Using AI Tools

    Tracy brings to light the increasingly effortless creation of false personas through AI tools such as ChatGPT, an iteration of AI language models capable of generating human-like text. These tools can fabricate compelling narratives, mimic specific writing styles and create non-existent but believable social media profiles or entire personas. Tracy points out how these synthetic entities can be programmed to deliver credible-sounding propaganda, influence political campaigns, or sow discord by spamming internet platforms with targeted misinformation. The creation of these artificial personas signifies a dramatic shift in how information can be disseminated, posing risks of eroding trust in digital communication and complicating the battle against fake news.

    About Our Guest

    Tracy Bannon is a Senior Principal with MITRE Lab's Advanced Software Innovation Center and a contributor to It’s 5:05! podcast. She is an accomplished software architect, engineer, and DevSecOps advisor who has worked across commercial and government clients. She thrives on understanding complex problems and working to deliver mission/business value at speed. She’s passionate about mentoring and training and enjoys community and knowledge-building with teams, clients, and the next generation. Tracy is a long-time advocate for diversity in technology, helping to narrow the gaps as a mentor, sponsor, volunteer, and friend.

    Episode Links

    It’s 5:05! Unmasking Election Security: How Cybersecurity Stress Tests Battle Misinformation
    It’s 5:05! AI Election Disinformation
    It’s 5:05! SpyGPT
    Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
    Allan Friedman’s Tech Transforms Episode
    It’s 5:05! More CISA Leadership: Security Planning Workbook
    CISA’s Security Planning Workbook
  • As technology rapidly innovates, it is essential we talk about technology policy. What better way to get in the know than to have an expert break it down for us? Meet Ross Nodurft, the Executive Director of the Alliance for Digital Innovation. Ross dives in, explaining the evolution of FedRAMP controls and the recent, giant, AI Executive Order (EO) from the White House. Listen in to find out what this EO means for the government, the industry and the workforce as the U.S. attempts to implement policy ahead of AI innovation.

    Key Topics

    04:25 Increasing security controls for cloud migration
    07:51 Discussion about customer feedback and cloud migration.
    12:17 Encouraging commercial solutions into federal government securely.
    15:39 Artificial intelligence shaping policy for future technology.
    16:54 AI EO covers critical infrastructure, AI, data, immigration.
    22:34 Guidance on AI impact assessment and testing.
    27:02 AI tools adoption must not be delayed.
    30:03 Ensure AI technologies have fail-safe mechanisms.
    32:08 Concern over rapid pace of technological advances.
    34:29 AI and technology advancing, policy aims control.
    39:37 Fascinating book on technology and chip history.
    The Future of Government Technology: Shifting to FedRAMP High and Accelerating Cloud Adoption

    Shift from FedRAMP Moderate to High for Sensitive Workloads

    When FedRAMP was established over a decade ago, the focus was on managing the accreditation of emerging cloud infrastructure providers to support the initial migration of workloads. The baseline standard was FedRAMP Moderate, which addressed a "good amount" of security controls for less risky systems. However, Ross explains that increasing volumes of more sensitive workloads have moved to the cloud over time, including mission-critical systems and personal data. Consequently, agencies want to step up from Moderate to the more stringent requirements of FedRAMP High to protect higher-risk systems. This includes only allowing High-cloud services to interact with other High-cloud applications.

    The Evolution of Cloud Computing: "So right now, we're at the point where people are existing in thin clients that have access to targeted applications, but the back end compute power is kept somewhere else. It's just a completely different world that we're in architecturally." — Ross Nodurft

    The Future of Government Technology: Streamlining FedRAMP for the SaaS-Powered Enterprise

    According to Ross, the COVID-19 pandemic massively accelerated enterprise cloud adoption and consumption of SaaS applications. With the abrupt shift to remote work, organizations rapidly deployed commercial solutions to meet new demands. In the federal government, this hastened the transition from earlier focus on cloud platforms to widespread use of SaaS. Ross argues that FedRAMP has not evolved at pace to address the volume and type of SaaS solutions now prevalent across agencies. There is a need to streamline authorization pathways attuned to this expanding ecosystem of applications relying on standardized baseline security controls.

    High-level Security Controls for Sensitive Data in the Cloud

    Addressing Data Related to Students and Constituents

    Ross states that as agencies move more sensitive workloads to the cloud, they are stepping up security controls from FedRAMP Moderate to FedRAMP High. Sensitive data includes things like personal HR data or data that could impact markets, as with some of the work USDA does. Willie gives the example of the Department of Education or Federal Student Aid, which may have sensitive data on students that could warrant higher security controls when moved to the cloud.

    Ross confirms that is absolutely the case: the trend is for agencies to increase security as they shift more sensitive systems and data to the cloud, especially with remote work enabled by the pandemic. Agencies with data related to students, constituents, healthcare, financial transactions and the like are deciding to utilize FedRAMP High, or to tailor Moderate with additional controls, when migrating such workloads to ensure proper security and rights protections.

    The Future of Government Technology: Navigating the Tradeoffs Between Cloud Innovation and Data Security

    As Ross explains, FedRAMP High means you can only interact with other cloud applications that are also FedRAMP High. So there is segmentation occurring, with more sensitive data and workloads being isolated via stricter security controls. However, he notes it is not a "bull rush" to FedRAMP High. Rather, agencies are steadily moving in cases where the sensitivity of the data warrants it.

    Willie then asks about the costs associated with these stricter cloud security authorizations, given even Moderate is expensive. Ross explains there are currently policy discussions underway about making FedRAMP more streamlined and cost-effective so that innovative commercial solutions can still sell to the government without having to completely re-architect their offerings just for these processes. The goal is balancing the accessibility of cloud solutions with appropriate security based on data sensitivity.

    Modernizing Federal Government IT: "We need to stop requiring companies to have their own completely separate over architected environment. We want commercial entities to sell commercially built and designed solutions into the federal government." — Ross Nodurft

    Laying the Groundwork: The AI Executive Order and the Future of Government Technology

    Robust Framework for Future Policy and Legal Development

    Ross states that the AI Executive Order is the biggest and most robust executive order he has seen. He explains that it attempts to get ahead of AI technology development by establishing a framework for future policy and legal development related to AI. Ross elaborates that there will need to be additional regulatory and legal work done, and the order aims to "wrap its arms around" AI enough to build further policy on the initial framework provided.

    According to Ross, the order covers a wide range of topics including AI in critical infrastructure, generative AI, immigration reform to support the AI workforce, and government use of AI. He mentions the order addresses critical infrastructure like pipelines, hospitals, transportation systems and more. It also covers immigration policy changes needed to ensure the U.S. has the talent to advance AI. Additionally, it focuses heavily on government consumption and deployment of AI.

    Mapping the Future of Government Technology

    Navigating the Future of Government Technology

    The AI executive order tasks the Office of Management and Budget (OMB) with developing guidance for federal agencies on the safe and secure adoption of AI. Specifically, Ross states that the order directs the Federal CIO and other administration officials to establish rules that allow government consumption of AI in a way that protects safety and rights. Before writing this guidance, the order specifies that OMB must consider the impacts of AI on safety-critical infrastructure as well as rights like privacy and fairness.

    Ross explains that OMB recently released draft guidance for public comment. He says this draft guidance contains several key components. First, it establishes AI governance requirements, directing every major federal agency to appoint a Chief AI Officer and create an AI council with agency leadership that will oversee adoption. Second, it mandates that agencies take inventory of existing AI use and develop plans detailing how they intend to utilize AI going forward.

    Requirements for Agencies to Appoint a Chief AI Officer

    According to Ross, a primary governance requirement in the OMB draft guidance is that all major agencies assign a Chief AI Officer to spearhead their efforts. Additionally, he notes that the guidance orders agencies to construct AI councils with membership spanning functions like IT, finance, HR and acquisition. Ross specifies that these councils will be led by the Deputy Secretary and Chief AI Officer of each department.

    The Uncertain Future of Government Technology

    Collaboration, Prioritization of Assessments, Compliance, Monitoring and Validation

    Ross highlights the need for collaboration between industry and agencies to address issues like prioritization, timing, specifics of compliance, attestation and who pays for and validates assessments. The order pushes the use of AI but lacks specifics, which could slow adoption of widely used technologies that include AI. Ross notes this could introduce friction, slowing productive technologies at a time when faster digital services are demanded. Better-defined compliance pathways are needed to avoid nervousness about using AI.

    AI Ethics and Regulation: "You've got to run as close to live testing as possible, you've got to have human people factored into the decision-making engines." — Ross Nodurft

    While embracing AI, the order does not detail how to facilitate adoption. Ross says this could cause confusion across agencies. His trade association ADI sees the need to add specifics around governance mechanisms to avoid inconsistencies. The lack of clarity risks friction and slowing AI incorporation, which Ross believes is imperative.

    Balancing Innovation and Responsibility in Emerging Technologies

    Demand for a Digital Environment and the Importance of Observability

    Ross states that there is a quick move towards a digital environment across all services, driven by demand from millennials, Gen X and Gen Z. He emphasizes that everything needs to have an app or digital access now to engage users. Ross then highlights how Dynatrace provides important observability of these new cloud-based architectures, allowing agencies to understand usage, interactions and performance. He argues this is essential to properly managing digital services.

    Ross worries that the new AI executive order guidance lacks specifics around compliance, which risks creating friction in adopting widely-used technologies like Dynatrace that have AI components. He states there is uncertainty whether tools like Dynatrace must be inventoried and assessed under the new policy. If so, there are many open questions around prioritization, timing, specific compliance activities, and who pays associated costs. Ross emphasizes that this uncertainty could hinder cloud adoption without more clarity.

    Responsibility and Control Over the Use of AI Technology

    Ross stresses that while AI technology enables incredible things, we have full control and responsibility over its uses. He states we must consider processes and safeguards that provide oversight and allow intervention over AI systems. Ross argues we cannot afford to deploy AI blindly, but highlights it is in our power to leverage these technologies to benefit humanity with appropriate guardrails.

    Shaping the Future of Government Technology

    The Future of Government Technology and Managing Change for Emerging Fields

    Ross asserts today there is greater intention around anticipating risks from emerging technology compared to past eras. He advocates for building off switches and review processes that allow understanding and course correction around new innovations like AI. Ross states this considered approach is essential for nanotechnology, quantum computing and other exponentially advancing fields.

    The Influence of Artificial Intelligence in Policy and Legal Development: "But artificial intelligence is now more than ever being built into everything that we do technologically." — Ross Nodurft

    Ross disputes the concern that AI will replace jobs, arguing instead it will shift skills required by humans. He provides examples of comparable historical technology shifts requiring new expertise, like transitioning from horses to locomotives. Ross states AI moves job responsibilities in different directions rather than eliminating careers, necessitating learning new tools and approaches.

    Establishing Processes and Organizational Structures for the Future of Government Technology

    Ross highlights how the AI executive order establishes agency governance bodies to oversee adoption. He details required personnel like Chief AI Officers that must review and approve AI use. Ross states these processes aim to identify risks in using innovations like AI while still encouraging adoption. He argues this organizational oversight is a new paradigm essential for emerging technologies.

    About Our Guest

    Ross Nodurft is the Executive Director of the Alliance for Digital Innovation (ADI), a coalition of technology companies focused on bringing commercial, cloud-based solutions to the public sector. ADI focuses on promoting policies that enable IT modernization, cybersecurity, smarter acquisition and workforce development. Prior to joining ADI, Ross spent several years working with industry partners on technology and cybersecurity policy and several years in government, both in the executive and legislative branches, including serving as Chief of the Office of Management and Budget's cyber team in the White House.

    Episode Links

    Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
    FedRAMP
    Turkey Gumbo Recipe
    Chip War by Chris Miller
  • Have no fear, your new wingman is here! AI is by your side and ready to help you multiply your abilities. Patrick Johnson, Director of the Workforce Innovation Directorate at the DoD CIO, discusses how his team is working to further implement AI ethically and safely in areas such as human capital to expedite finding talent. Patrick also shares his passion for building cyclical pipelines to ensure that talent, and ideas, flow seamlessly between the government and private sector. Join us as we dive further into AI’s benefits and how government and industry can be cyber workforce innovation partners.

    Key Topics

    02:06 Lag in civilian workforce training upscaling needed.
    03:19 Balancing talent, training and automation for better security.
    08:22 Leaders understand AI as a force multiplier.
    12:15 Our motivations are different; utilizing AI for advancement.
    15:25 AI used for maintenance, scheduling, monitoring issues. Embracing technology.
    18:35 Questioning impact of technology on workforce integration.
    21:45 Knowledge, skills, ability, competency. Task-focused performance. Workforce coding. Qualification program ensures necessary skill sets. Tracking mechanism being developed. Vast department with skill spread.
    25:26 Real-time data for proactive leadership and action.
    27:05 Retention strategy includes talent competition and permeability.
    30:36 Improving marketing for civilian DoD jobs.
    33:49 It works for all sectors, find talent.
    40:19 Government employees and veterans bring valuable skills.
    41:27 Promote supply, train, partner for innovation.
    45:33 Virtual reality: future of government services and museums.
    The DoD's Cyber Workforce

    Cyber Workforce Improvement Is Crucial

    Patrick states that the Department of Defense's (DoD's) total cyber workforce, comprising military, civilian and industry partner contractors, is around 225,000 people. He notes that the DoD has the biggest gap in the civilian cyber workforce, which makes up about 75,000 people. According to Patrick, one of the key problems when bringing new cybersecurity technologies online is failing to adequately train the existing workforce on how to use and get value from those technologies.

    Training and Upscaling the Current Cyber Workforce

    Rather than pursuing full re-skilling of employees, which can set them back, Patrick advocates for upskilling the current DoD cyber workforce. This involves assessing talent and capability gaps, then providing the workforce with the necessary training to use new technologies appropriately. Patrick states that partnering workforce members with automated processes like AI can help them become more effective by highlighting key info and threats.

    The Importance of Training and Upscaling in the Cyber Workforce: "Well, it's great to put new technology on the table. But if you don't take the time to train the workforce you have in the programs or the systems you're bringing online, you lose that effectiveness and you don't really gain the efficiencies or the objectives that you need to be." — Patrick Johnson

    Automation and AI

    AI Is Seen as a Partnership With the Human Cyber Workforce

    Patrick views AI as a partnership with the human workforce rather than a threat. He emphasizes that AI should be seen as a "wingman or wingperson" that boosts productivity and acts as a force multiplier. Patrick explains that AI excels at rote, tedious tasks allowing the human workforce to focus more on creativity.

    AI Helps With Rote and Tedious Tasks

    According to Patrick, AI is adept at attention-to-detail tasks that would be tedious for a human to manually perform. He provides the example of a cybersecurity analyst or defender whose productivity can be enhanced by AI highlighting anomalies in data that they should pay attention to. This allows them to catch more threats and intrusions coming through their systems.
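    As a toy illustration of that "highlighting" role (not a tool Patrick describes), the sketch below flags hosts whose failed-login counts sit far above the fleet's typical level so an analyst can look there first. The data, threshold and simple median-based rule are illustrative assumptions; production tooling would use much richer statistical or ML baselines.

```python
# A toy sketch of AI/automation as an analyst's "wingman": surface the handful
# of entries worth a human's attention. Data and threshold are made up;
# real systems build per-host, per-metric baselines instead of one fleet median.
from statistics import median

def flag_outliers(counts: dict[str, int], multiplier: float = 5.0) -> list[str]:
    """Return hosts whose count is several times the fleet median."""
    typical = median(counts.values())
    if typical <= 0:
        return []
    return [host for host, count in counts.items() if count > multiplier * typical]

# Example: one host generating far more failed logins than its peers.
failed_logins = {"host-a": 12, "host-b": 9, "host-c": 14, "host-d": 11, "host-e": 230}
print(flag_outliers(failed_logins))  # ['host-e']
```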

    The Rise of AI and the Fear of Job Loss: "AI can expedite that and do it really fast. It's about how do you fit in and use the technology that is there. So for individuals that are bent on just being one thing or doing a particular way, it's gonna be a struggle." — Patrick Johnson

    AI as a Productivity Multiplier

    Patrick argues that all organizations are understaffed and says AI is like "adding a person and a half" to your existing workforce. In his view, this boosts productivity significantly if the technology is utilized correctly. He believes AI's capabilities in assisting with repetitive tasks allow human workers to focus more on creative problem-solving.

    AI as a Cyber Workforce Multiplier

    AI Applied in Various Industries With New Jobs and Opportunities

    Patrick explains that AI excels at automating repetitive, detail-oriented tasks, freeing up humans to focus on more creative responsibilities. As AI develops, Patrick believes new industries and opportunities will emerge. He references how industrial automation led to new maintenance jobs. Similarly, current AI advances likely indicate the rise of new industries needing workers to oversee AI systems.

    Harnessing Talent in the Digital Age: "It'll work. If it works for cyber, why wouldn't it work for aviation, or why wouldn't it work for logistics? It's gonna work for just about any approach you wanna take." — Patrick Johnson

    Current AI Advancements Focused On Non-creative Tasks

    Willie agrees that true general AI with human-level creativity remains a distant prospect. He characterizes current AI as skilled at rote, non-creative work. While AI can simulate creativity by aggregating data, Willie argues it cannot independently demonstrate innovation as humans do. He believes consciousness and creativity constitute scientific frontiers we are far from unlocking in silicon.

    Ethical Use of AI in Western Societies

    Ethical Considerations in Western Societies

    Patrick discusses how there is an ethical piece when it comes to AI and its use in Western societies. He notes that the DoD's Chief Digital and Artificial Intelligence Office, and their principal staff assistant, are really looking hard at the ethical use of AI. Patrick contrasts this to some of the department's peer competitors, without naming specific countries, who are not as worried about using AI ethically. He explains that in Western societies that have operated in a prosperous, peaceful way for almost 90 years, there is more concern about ethics with emerging technology like AI.

    U.S. is Known for Creativity and Innovation

    Patrick talks about how one of the strengths of the U.S. as a nation is the focus on creativity, innovation and free thinking. He says these characteristics allow new technologies to fully prosper and reach their potential, unlike in some other systems where ulterior motives from the state or ruling party may limit capabilities. Patrick notes that some other countries utilize AI for better understanding their citizens, monitoring people and tracking behaviors without as much ethical concern.

    AI in the Defense Department

    DoD Exploring AI in Maintenance Schedules and Issue Tracking

    Patrick notes that the Defense Department is currently using AI in enclosed systems to improve maintenance schedules and track issues. He explains that this allows them to leverage AI to expedite when certain maintenance actions need to be taken and monitor problems. While the department is still in the early stages of incorporating AI, Patrick emphasizes they are embracing it for these types of automatable tasks rather than avoiding it due to security concerns.

    Embracing AI While Avoiding Security Risks

    When discussing AI, Patrick acknowledges there are legitimate security worries given the sensitive nature of the Defense Department's systems and environment. However, he states these concerns should not deter the department from bringing AI capabilities to the forefront. Patrick argues the department needs to find ways to ethically and safely integrate AI so it does not pose risks. He mentions this is an area of focus for the Chief Digital and Artificial Intelligence Office.

    The Role of AI in Cybersecurity: "It really is about looking at your talent and your gaps and then giving them the training they need to execute the new technology, appropriately." — Patrick Johnson

    AI Used to Automate Human Capital Tasks

    In terms of human capital functions, Patrick highlights how the department is already employing AI to streamline and automate certain talent management processes. For example, he explains they are using AI-enabled systems to expedite applicant-job matching and make hiring more efficient. Additionally, Patrick notes AI is helping align training offerings and certifications to the workforce skill gaps the department needs to fill. He emphasizes these applications demonstrate the promise of AI in automating tedious tasks that would normally take humans much longer to accomplish manually.

    Measuring Impact and Maintaining Cyber Workforce Technology Balance

    Using AI to Measure Program Impact

    Patrick explains that they are using AI to track metrics like attrition rates, vacancy rates, losses and gains. This allows them to do predictive analysis to project future vacancy rates and take proactive action when needed. For example, Patrick can put up real-time data for leadership showing that if no action is taken, vacancy rates could rise from 17% to 37% in two years. This prompts leadership to address gaps proactively before problems become severe. Patrick envisions AI having an even greater impact by identifying talent gaps across the department and giving the services enough lead time to ramp up training programs accordingly.
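    A back-of-the-envelope version of that kind of projection might look like the sketch below, which rolls a headcount forward month by month from assumed attrition and hiring rates. All figures are illustrative, not the Department's actual data, and real tooling would layer far more sophisticated predictive analytics on top.

```python
# A back-of-the-envelope sketch of projecting a future vacancy rate from
# current fill, attrition and hiring assumptions. All numbers are illustrative.

def project_vacancy_rate(authorized: int, filled: int,
                         monthly_attrition: float, monthly_hires: int,
                         months: int) -> float:
    """Roll headcount forward month by month and return the ending vacancy rate."""
    for _ in range(months):
        losses = round(filled * monthly_attrition)
        filled = min(filled - losses + monthly_hires, authorized)
    return 1 - filled / authorized

# Example: 10,000 authorized positions, 8,300 filled today (17% vacant),
# 1.8% monthly attrition, 60 hires per month, projected two years out.
print(f"{project_vacancy_rate(10_000, 8_300, 0.018, 60, 24):.0%}")
```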

    Balancing Cyber Workforce and Technology

    To balance workforce and technology, Patrick emphasizes the need to train the current workforce on new systems and technologies rather than expecting them to instantly adapt. He uses the example of implementing Zero Trust security, noting that deploying the technology alone is not enough if the workforce is not properly trained to leverage and maximize it. Patrick believes AI should be viewed as a "wingman" to augment human capabilities rather than replace jobs. Proper AI integration requires change management and culture change around utilizing automation.

    Tracking Skills With 8140 Qualification Program

    The 8140 qualification program tracks skillsets needed to perform critical cyber work roles across the department. By coding the entire military and civilian cyber workforce with work roles rather than just competencies, they gain visibility into the location of talent gaps. Work roles also allow them to incentivize critical positions rapidly. As they collect more workforce data, this program will enable sophisticated predictive analytics to get ahead of future talent and skill deficits.

    About Our Guest

    Mr. Patrick Johnson serves as the Director of the Cyber Workforce Management Directorate in the Office of the Deputy CIO for Resources and Analysis, Department of Defense (DoD) CIO.

    In his role as Director, Mr. Johnson leads a dynamic team responsible for the Directorate’s expansive workforce management portfolio and program development supporting the broader talent management lifecycle for the Department’s cyberspace workforce. Directorate initiatives include the DoD Cyber Workforce Framework (DCWF) expansion, training and education program development (Cyber Scholarship, Cyber Exchange, etc.), Cyber Workforce Management Board (CWMB) facilitation, Cyber Excepted Service (CES) Personnel System, and the 8140 policy series implementation which establish enterprise baseline standards and requirements according to DCWF work role(s). At the OSD level, the Cyber Workforce Directorate's role is to leverage authorities and provide Department stakeholders with policies, programs, and tools to effectively recruit and retain a highly skilled cyberspace workforce.

    Mr. Johnson previously served as the Chief, DoD Cyber Excepted Service where his leadership played a pivotal role in the development and implementation of the Cyber Excepted Service Personnel System, and ultimately mission expansion into today’s Cyber Workforce Directorate.

    Mr. Johnson entered federal service in 2011, following more than 24 years of service in the U.S. Army. Prior to becoming the Director of the DoD CIO Cyber Workforce, Mr. Johnson served in a variety of positions in the Department, rising to his position today from his first role as Intelligence Combat Developer, with the U.S. Army Intelligence and Security Command (INSCOM). In his expansive civil service career Mr. Johnson has also served as Deputy Director, Military Personnel DLA; Cyber Integrator, OSD Personnel and Readiness (P&R); Senior Program Manager (Retention), Deputy Chief of Staff Army G-1.

    Mr. Johnson spent his early career in the U.S. Army, serving as a Military Policeman, Protective Service Agent, Military Police Investigator, and Career Counselor culminating in his role as Special Liaison with U.S. Army Intelligence Support Activity within the Joint Special Operations Command (JSOC).

    Episode Links

    Tech Transforms Ep. 69 with Jon Pelson
    Guardians of the Galaxy Vol. 3
    Star Wars VR Games
  • Meet the man on a mission to make software bills of materials (SBOMs) boring. In this So What? episode, Tracy Bannon and Carolyn Ford sit down with Allan Friedman, the Senior Advisor and Strategist at the Cybersecurity and Infrastructure Security Agency (CISA). Allan tells us how he is working to change how all software on the planet is made and sold. No big deal, right? Join us as we dive into the world of SBOMs, xBOMs, and Secure by Design.

    Key Topics

    03:59 Track open source licenses, establish shared vision.
    08:47 Discussing US government requirements, diversity in software.
    12:07 Framework helps organizations with secure software development.
    13:49 Organizations unaffected, prepare for impending software changes.
    17:40 Concerns about sharing software with potential security risks.
    20:59 Concerns about network security and regulatory pushback.
    24:14 Enhanced security measures save thousands of hours.
    27:53 Applying AI and data BOMs in conversation.
    32:38 Discusses the importance of SBOM in cybersecurity.
    36:29 Rewriting global code is a complex task.
    39:39 At RSA, little focus on secure design.
    41:53 Organization's need for SBOM, call to action.
    43:55 Cooking for diverse family, diverse food requirements.
    Challenges and Implementation of SBOMs

    Self-Attestation for SBOMs

    Allan Friedman explained that there is currently a self-attestation model for SBOMs, where companies can sign a form stating that they have implemented SBOMs, rather than providing the actual SBOM data. This allows flexibility for organizations that are not yet ready to fully comply. However, it means buyers have to trust the attestation rather than seeing the SBOM details directly.

    Secure Software Development Model Compliance: "The challenge there is turning the framework back into a compliance model. Because, again, at the end of the day, everyone wants to think about things. Right? Understand your risk, but you still need to make that yes or no decision." — Allan Friedman

    Tracy Bannon noted some companies have concerns about sharing their SBOM data with customers, worrying that the customer may not have secure enough practices to properly protect the SBOM. Allan Friedman explained SBOMs do not need to be public - they can be shared privately between supplier and customer. Known unknowns in the SBOM can also help address concerns about revealing proprietary information.

    Debate About the Risk of Sharing SBOMs as a Road Map for Attackers

    Allan Friedman argued that sophisticated attackers likely do not need the SBOM, as they have other ways to analyze and reverse engineer software. Automated attacks also do not leverage SBOMs. He noted defenders actually need the visibility an SBOM provides into components and dependencies. There may be some risk of exposing attack surface, but the benefits seem to outweigh that.

    The Importance of SBOM for Product Security: "If we had this, we had SBOM across our products today, it would save us thousands of hours a year Because whenever the next Log4j comes out, if you have a centralized machine readable, scannable system, It's not that hard." — Allan Friedman
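    To make the "next Log4j" point concrete, here is a minimal sketch of what that centralized, machine-readable lookup could look like, assuming an organization keeps its SBOMs as CycloneDX JSON files in one directory. The component name, version set and file layout are illustrative assumptions, not a specific system discussed in the episode.

```python
# A minimal sketch of the "scannable SBOM" idea: given a directory of CycloneDX
# JSON SBOMs, list every product that ships an affected component.
# Directory layout, component name and version set are illustrative assumptions.
import json
from pathlib import Path

def find_affected(sbom_dir: str, component: str, bad_versions: set[str]) -> dict[str, str]:
    """Map SBOM file name -> matched version for any SBOM containing the component."""
    hits: dict[str, str] = {}
    for path in Path(sbom_dir).glob("*.json"):
        bom = json.loads(path.read_text())
        for comp in bom.get("components", []):
            if comp.get("name") == component and comp.get("version") in bad_versions:
                hits[path.name] = comp["version"]
    return hits

# Example: which products shipped a vulnerable log4j-core build?
print(find_affected("sboms/", "log4j-core", {"2.14.1", "2.15.0"}))
```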

    Allan Friedman noted there has been some lobbyist pushback against SBOM mandates, often coming from trade associations funded by companies already implementing SBOMs. He said while healthy debate is good, many of the lobbyist complaints seem misguided or overblown.

    The Potential Role of AI in Creating SBOMs and Its Implications for Security

    Carolyn Ford asked whether AI could help automate SBOM creation, especially for legacy systems. Tracy Bannon cautioned that AI is not yet at the point where it can reliably generate code or understand large complex contexts. AI may eventually assist, but currently is not ready to take on SBOM tasks. As AI is software, it needs to be secured using the same best practices as other code.

    Tracy Bannon explained SBOM implementation may be harder for organizations with large legacy codebases and multiple complex or siloed systems. However, even newer companies can struggle if they have not built SBOM processes into their SDLC. Allan Friedman noted while costs exist, especially for older systems, SBOMs ultimately save defender time and money.

    Benefits of Better Engineering Processes

    Allan Friedman said some organizations view SBOM mandates positively, as it gives them budget and justification to reengineer antiquated processes. Overall, SBOMs provide incentives and reasons to follow modern secure software practices.

    Tracy Bannon emphasized that any mandated change involves costs, which need to be acknowledged. But driving adoption of SBOMs and secure development practices is still an important improvement goal. Organizations should be supported in this transition.

    Government Requirements and Standards

    Complexities of US Government Requirements for Software

    Allan explains that the executive order issued requirements that all software sold to the US government would need to meet certain security practices, like having separate development and build environments and using multi-factor authentication. While these may seem basic, turning the NIST framework into concrete compliance requirements has been challenging. The government pushed for a quick definition of SBOMs, while agencies said it would take months. There's a need to balance the push for progress with the realities of implementing changes across complex legacy systems.

    Open Source License Tracking: "And if you're an organization, you need to track which open source licenses are you using both in your open source and your code because there are strong rules for some of them." — Allan Friedman

    For some parts of the software world, Allan notes that SBOMs are already considered standard practice. Modern developers with continuous integration pipelines can easily generate SBOMs automatically. The challenge is bringing along the organizations still using legacy tools and processes. Widespread adoption will take time. The goal is for SBOMs to become a boring, expected part of software delivery that doesn't require much discussion.
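    For teams at that "already standard practice" end of the spectrum, automatic generation can be as simple as one extra CI step. The sketch below emits a CycloneDX-shaped document from a pinned Python requirements file; it is only a simplified illustration of the shape of the output, and real pipelines would lean on mature SBOM generators rather than hand-rolled scripts.

```python
# A simplified sketch of SBOM generation as a CI step: read pinned dependencies
# and emit a CycloneDX-style JSON document. Mature SBOM generators cover far
# more (transitive dependencies, hashes, licenses); this only shows the shape.
import json
import sys
from pathlib import Path

def build_sbom(requirements_file: str) -> dict:
    components = []
    for line in Path(requirements_file).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue  # skip comments and unpinned entries in this toy version
        name, version = line.split("==", 1)
        components.append({
            "type": "library",
            "name": name,
            "version": version,
            "purl": f"pkg:pypi/{name}@{version}",
        })
    return {"bomFormat": "CycloneDX", "specVersion": "1.5", "components": components}

if __name__ == "__main__":
    # e.g. `python make_sbom.py requirements.txt > sbom.json` as a pipeline step
    json.dump(build_sbom(sys.argv[1]), sys.stdout, indent=2)
```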

    Timeline and Process Following the Executive Order

    The 2021 cybersecurity executive order mandated the use of SBOMs but didn't define what they were. After pushing for a faster timeline, the government issued a minimum definition of SBOMs within 60 days. NIST then updated their secure software development framework with guidance. The next step is moving from framework to compliance model, with self-attestation as a starting point until more formal requirements are in place across agencies.

    The executive order mandated SBOMs but didn't define them, so the government had to quickly issue a minimum definition of what constitutes an SBOM. This was a challenging process that required balancing perspectives from across government and industry. The public and private sectors need a shared understanding of what SBOMs are as adoption spreads.

    Concerns and Solutions

    Concerns From Corporations and Suppliers About Revealing Proprietary Information

    Allan acknowledges there are concerns from some corporations and suppliers that providing an SBOM could reveal proprietary intellectual property or special sauce in their software products. Many organizations want to avoid exposing their competitive advantage or secret methods. Allan says the SBOMs do not need to be public - they can be shared directly and privately with the customer purchasing the software. There are also ways to designate known unknowns or gaps in the SBOM data.

    The Importance of Software Bill of Materials (SBOM): "We're building the plane while we're flying it." — Allan Friedman

    Tracy raises the concern she has heard that requiring companies to share SBOMs with customers could potentially expose their intellectual property if those SBOMs are not properly secured. She notes there have been many high-profile data breaches lately. This means vendors may be wary about sharing an SBOM with a customer if they lack confidence in that customer's data security practices. There needs to be trust between the entities exchanging SBOMs.

    Claims Regarding the Majority of SBOM Content Not Being Secretive

    In response to concerns about IP exposure, Allan argues that for most large software projects, the bulk of what is contained in an SBOM does not represent core proprietary IP or secret sauce. As an example, he says that just listing common third-party libraries used does not reveal a competitive advantage. So fears may be overblown about SBOMs leaking meaningful intellectual property.

    Given the valid concerns around proprietary code exposure and SBOM generation limitations, Allan advocates for the concept of designating "known unknowns". This would allow software providers to specify areas of the codebase or supply chain that have incomplete SBOM data due to proprietary restrictions or tooling gaps. Known unknowns enable transparency about the boundaries of SBOM coverage.

    Software Supply Chain Security and SBOMs

    Buffer Overflows and Memory Unsafety in Programming Languages

    Allan Friedman explained that a large percentage of vulnerabilities arise from memory issues. Buffer overflows are a simple example, but there are thousands of variants that allow attackers to execute malicious instructions by tricking a system into accessing attacker-controlled memory regions. This memory unsafety occurs primarily in languages like C and C++ that lack memory safety protections.

    Given the risks from memory unsafety, Friedman discussed CISA's vision of pushing more secure software development through the use of memory-safe languages. Languages like Rust and Go provide memory safety protections that prevent common categories of vulnerabilities. However, rewriting major legacy codebases will take time. CISA is exploring partnerships and incentives to accelerate adoption of memory-safe languages over the long term.

    Group Dealing With a Large Ada Code Base and Other Languages

    Tracy Bannon noted that some organizations, unfortunately, cut budgets by avoiding automated testing in favor of manual testing. But requirements like SBOMs remove excuses to not invest in automated processes and improved engineering.

    Tracy Bannon mentioned there are ongoing conversations with the Department of Defense around extending the SBOM concept to data through "data BOMs." While AI and algorithms are software, data artifacts like model cards and data cards also need supply chain transparency.

    Bannon highlighted that she works with a group managing a complex codebase that includes not only a substantial amount of Ada but also 13 other languages layered onto the system. This exemplifies the challenges of legacy systems.

    Friedman explained that CISA's director and CISO have been pushing the secure by design initiative to make software more inherently secure out of the box. He provided examples like moving away from hardening guides and instead selling software locked down, with optional integration instructions.

    About Our Guest

    Allan Friedman is a Senior Advisor and Strategist at the Cybersecurity and Infrastructure Security Agency (CISA). He coordinates the global cross-sector community efforts around software bill of materials (SBOM). He was previously the Director of Cybersecurity Initiatives at NTIA, leading pioneering work on vulnerability disclosure, SBOM, and other security topics. Prior to joining the Federal government, Friedman spent over a decade as a noted information security and technology policy scholar at Harvard’s Computer Science Department, the Brookings Institution, and George Washington University’s Engineering School. He is the co-author of the popular text Cybersecurity and Cyberwar: What Everyone Needs to Know, has a C.S. degree from Swarthmore College, and a Ph.D. from Harvard University.

    Episode Links

    Executive Order on Improving the Nation’s Cybersecurity
    CISA’s Software Bill of Materials (SBOM) Page
    CISA Blog from Christine Lai and Dr. Jonathan Spring
    Allspice Dram in Cocktails
  • In the final, crossover episode of our three-part Halloween series, Eric Monterastelli, Public Sector SE at Delinea, Founder, Crew Chief of Gran Touring Motorsports and Host of the Break/Fix Podcast, joins Carolyn Ford and Tracy Bannon to discuss the scary reality of car security. Is your car spying on you? Can a nefarious actor take over your car? Does your car know your deep personal data like your immigration status, race and more? Hint: It can and it does.

    Key Topics

    00:02:05 Technology advances put vehicles at risk.
    00:06:25 Hijacked Jeep's wireless signal, turning it off.
    00:07:35 Chrysler systems hacked due to digital admission.
    00:10:47 New EV platforms streamline technology for efficiency.
    00:15:13 Disconnect, purge and be careful: data can be accessed.
    00:18:58 Using TrueCar, author obtained personal information illegally.
    00:21:54 Pre-OBD2 Mercedes is OBD1.
    00:25:12 Mozilla uncovers alarming auto data collection.
    00:28:29 Future vehicles will have integrated alcohol-detection systems.
    00:32:48 Routers, cars can be hacked, collect data.
    00:35:42 Read your vehicle's owner's manual for instructions.
    00:36:55 Speak to rental clerk about removing data.
    The Intersection of Cybersecurity, Car Security and the Ghostbusters Mission

    Ghostbusters Mission: Car Security & Car Hacking

    Eric Monterastelli talks about how cars have evolved to include more computing technology, which opens them up to potential attacks. He gives the example of a Jeep that was hacked to shut off while driving, demonstrating the real dangers.

    Tracy Bannon contrasts U.S. car manufacturers that use many third-party components versus Tesla's more integrated system. She argues Tesla's approach may lend itself to more car security. The hosts explore different potential attack vectors into vehicles, like Bluetooth connections.

    Mozilla Participants Share Automotive InfoSec Insights

    Eric Monterastelli shares findings from a Mozilla report about the wide range of deep personal data that can be collected from cars, including things like facial expressions, weight, health information and more. The hosts are alarmed by the privacy implications.

    Tracy Bannon advocates that car manufacturers need to make cybersecurity a priority alongside traditional safety. She indicates cars are data centers on wheels, collecting information that gets sent back to big cloud data centers. They emphasize the need for vigilance from car owners about what information they allow their vehicles to collect.

    Concerns About Data Collection in Modern Vehicles

    Modern Car Security: Braking, Speed and Steering Patterns

    Eric discusses the extensive data that is now collected by modern vehicles, especially EVs. He notes that information is gathered on things like stopping distances, brake pressure applied, vehicle speed and overall driving habits. This data is no different than the type of driver performance analysis done in race cars. Automakers are collecting real-world usage data from customer vehicles to analyze driving patterns and vehicle responses. Tracy adds that the average new vehicle contains over 100 different computers and millions of lines of code that are all networked together. This networked data covers areas like powertrain functions, safety features and infotainment systems. All of this interconnected data presents opportunities for tracking very detailed driving behaviors.

    Privacy Risks in Driving: Collecting Personal Data and Concerns

    Eric cites a concerning report that modern vehicles can potentially collect extremely sensitive personal data simply through normal driving, including information on immigration status, race, facial expressions, weight, health conditions and even genetic data. He explains that optical facial recognition software could be applied to cameras already present in many vehicles. Other data like weight and health metrics can be gathered from sensors in seats or wearable devices synced to the vehicle. The interconnected nature of modern vehicle computers and far-reaching data collection enables mining of very private user information that goes well beyond basic driving statistics. Carolyn reacts with disbelief at the potential extent of personal data gathering described.

    Car Security Comparisons Between Traditional Manufacturers and Tesla

    Challenges in U.S. Car Manufacturing Component Compatibility

    Tracy explained that traditional U.S. car manufacturers have said they use components from hundreds of different distributors and providers. These components were not necessarily created to work together, unlike the approach taken by Tesla. Since traditional manufacturers are buying piece A and knitting it together with piece B, piece C and piece D, there can be integration challenges. The components may not align well since they were not designed under the same umbrella with a holistic approach.

    Comparing Tesla's Integrated Approach to Enhance Car Security

    Tracy contrasted the traditional manufacturers' approach with Tesla, which has created everything under one umbrella. Tesla dictated to its component providers what the requirements were and how the parts needed to align with Tesla's needs. This holistic approach results in more seamlessly integrated, and likely more secure, vehicles compared to cobbling together components from many different organizations.

    Tesla's Privacy Concerns: "But Tesla, there's been reports and there's been investigations showing that they can turn on the cameras inside the car and see what you're doing. They've been spying on people. There's been all sorts of allegations that have been thrown out there." — Eric Monterastelli

    Combining Car Parts from Various Sources Raises Security Risks

    Eric and Tracy discussed how having disparate systems talking over a common bus and language can introduce vulnerabilities. While a proprietary closed system like Tesla's carries risks if it is fully compromised, assembling many components from different providers has its own downsides: there are more potential holes or vulnerabilities when piecing together parts from various organizations than when everything is designed and built under one umbrella.

    Integration of Systems in Modern Cars

    Unified Mainframe Powers Modern Electric Vehicles, Replacing Separate Components

    Eric discusses how newer electric vehicles like Teslas, Ford Mach-Es, and Porsches have a single mainframe that controls and interacts with all the components of the vehicle. In contrast, older cars had separate systems for the engine/drivetrain and infotainment that did not necessarily communicate with each other. For example, in a 2000s Chrysler, the infotainment system running the radio was separate from the encrypted Bosch system controlling the engine. Integrating all these components into one mainframe makes the new electric vehicles more convenient but also introduces potential vulnerabilities.

    Single Computer Control and Car Security Vulnerabilities Explored

    Tracy elaborates that the average new car today has over 100 different embedded computers and modules networked together, communicating via a CAN bus system. So there is one central computer that can interact with the engine, transmission, safety systems and infotainment features. While this integration is designed for efficiency and effectiveness of the software systems, it also means one access point can potentially control multiple components of the car. This is different from older cars, where systems were more isolated from each other. The interconnectedness makes modern vehicles potentially more susceptible to cyber attacks.
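
    As a rough illustration of the point above, the sketch below uses the python-can library to read frames from a single shared bus interface and label them by subsystem. The channel name and the arbitration-ID ranges are placeholders chosen for the example; real ID assignments vary by manufacturer and are not drawn from any published map.

        # Rough illustration of a shared CAN bus: frames from many modules arrive
        # on one interface, so a single foothold can observe traffic for several
        # subsystems. Channel name and ID ranges below are placeholders.
        import can

        bus = can.interface.Bus(channel="can0", interface="socketcan")

        # Hypothetical ID ranges; real assignments differ per manufacturer.
        SUBSYSTEMS = {
            range(0x100, 0x200): "powertrain",
            range(0x200, 0x300): "braking/safety",
            range(0x300, 0x400): "infotainment",
        }

        for _ in range(100):  # sample 100 frames, then stop
            msg = bus.recv(timeout=1.0)
            if msg is None:
                continue
            label = next((name for ids, name in SUBSYSTEMS.items()
                          if msg.arbitration_id in ids), "unknown")
            print(f"{label}: id=0x{msg.arbitration_id:03X} data={msg.data.hex()}")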

    The Vulnerabilities of Modern Vehicles: "For me, that's a scary reality. And it actually has shied me away from buying the newest of the new cars even though there are some really exciting things out there because what am I opening myself up to, if I buy a Ford Mach-E or a Tesla Model 3 or something else." — Eric Monterastelli

    Vulnerabilities and Risks in Modern Cars

    Integrating ML and AI into Cars through Computing Advancements

    Eric discussed how cars have evolved significantly in engineering since the early 1900s. He highlighted that around 2000, more powerful computing technology, including ML- and AI-capable computers, was integrated into vehicles to make decisions about engine performance and interact with various systems. This advancement allowed for additional "creature comforts" in cars, but it also opened them up to potential attacks and vulnerabilities that older cars did not face.

    Future of DUI Prevention: "It's gonna become standard issue like power windows and remote locks and things like that where you're not even gonna be able to drive and operate a vehicle if it senses that you're in any way inebriated or under the influence." — Eric Monterastelli

    Modern Vehicles' Complexity Heightens Vulnerabilities and Security Risks

    Eric further acknowledged that consolidating disparate systems into one mega computer, while making things more convenient, also introduced vulnerabilities. With everything controlled by one mainframe, the attack surface is larger. He contrasted modern vehicles to cars from the mid-2000s, where engines were still separate from entertainment systems. Now they are fully integrated, which provides more connectivity but less isolation among components.

    The Electric Vehicle Boom and Its Impact on Digital Systems

    According to Eric, the rise of electric vehicles has led to even more potential issues, as they rely even more heavily on electrical systems and digital connectivity like over-the-air updates. Features that make EVs exciting also make them more susceptible to cyber threats compared to traditional internal combustion cars. The reality that EVs open owners up to unknown risks has made Eric shy away from the newest vehicles.

    Differences in Car Security Among Manufacturers

    Contrasting Tesla and Porsche Systems: Unified Communication vs. Proprietary Approach

    Eric compared Tesla's interconnected systems to Porsche's components from various suppliers like Bosch. He said Tesla has full over-the-air access to its proprietary systems, while Porsche uses a CAN bus for its disparate systems to communicate. Although Tesla's system is closed to outsiders, it is completely open to Tesla itself.

    Tracy added more context, noting that Porsche is part of the same group as VW and Audi, which work with Bosch for many electromechanical parts like sensors and multifunction interfaces. She reiterated that these disparate systems in Porsche vehicles communicate via a CAN bus system.

    Eric acknowledged Tracy's point that both brands use a CAN bus for the back-end electrical system. However, he still sees more risk in Tesla having full over-the-air access to a closed proprietary system than in Porsche's various supplier components, which don't directly communicate beyond the CAN bus.

    Risks of Personal Data Storage in Cars

    Storing Personal Data in Car Infotainment Beyond Phone Disconnect

    Tracy explained that even after disconnecting your phone from a car's infotainment system, personal data like contacts and GPS history can remain cached in the system. She warned that simply pressing "disconnect" does not purge the infotainment system of your data. Eric added that unless you fully wipe the system, your data remains stored even after trading in or selling your car. He gave the example of someone pulling a used head unit from a junkyard car and, upon powering it up, having full access to the previous owner's contacts and address history.
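
    As a purely hypothetical sketch of what the junkyard head-unit scenario implies, the snippet below walks an extracted (mounted) head-unit storage image and flags SQLite databases whose tables look like cached contacts or trip history. The mount path, the table-name keywords and the overall layout are assumptions; every infotainment platform stores paired-phone data differently.

        # Purely hypothetical sketch: scan an extracted head-unit storage image
        # for SQLite databases with phone-book-style tables. The mount path and
        # table-name keywords are assumptions; real platforms differ widely.
        import os
        import sqlite3

        IMAGE_ROOT = "/mnt/headunit_image"  # hypothetical mounted storage dump
        KEYWORDS = ("contact", "phonebook", "recent", "navigation")

        def looks_like_sqlite(path):
            # SQLite files begin with a fixed 16-byte magic header.
            try:
                with open(path, "rb") as f:
                    return f.read(16).startswith(b"SQLite format 3")
            except OSError:
                return False

        for dirpath, _, filenames in os.walk(IMAGE_ROOT):
            for name in filenames:
                path = os.path.join(dirpath, name)
                if not looks_like_sqlite(path):
                    continue
                try:
                    db = sqlite3.connect(path)
                    tables = [row[0] for row in db.execute(
                        "SELECT name FROM sqlite_master WHERE type='table'")]
                    db.close()
                except sqlite3.Error:
                    continue  # skip corrupted or locked databases
                hits = [t for t in tables if any(k in t.lower() for k in KEYWORDS)]
                if hits:
                    print(f"{path}: possible personal data tables {hits}")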

    Cyber Security Perspective on Data Collection in Cars: "They can collect deep personal data such as sexual activity, immigration status, race, facial expressions, weight, health, and genetic information while you're driving." — Eric Monterastelli

    Car Disposal Doesn't Ensure Personal Data Erasure from Head Unit

    Tracy shared that her husband takes extensive precautions to prevent others from accessing personal data, such as degaussing old hard drives before disposal. She explained these same precautions should be applied to cars, since simply trading in or scrapping a car does not mean personal data is removed from components like the infotainment system. Eric affirmed this concern, stating that short of an EMP blast, data remains recoverable from the car's memory chips even after the car changes owners. He advised thoroughly wiping car systems before sale to prevent exposing personal information.

    About Our Guest

    Eric Monterastelli is the Public Sector SE at Delinea, Founder and Crew Chief of Gran Touring Motorsports and Host of the Break/Fix Podcast. He has more than 18 years of experience in information technology, specializing in systems engineering, virtualization and software development. His previous stops include Dynatrace, BAE Systems, Raytheon, the Department of Defense, LogRhythm and Symantec, among others.

    Episode Links

    Break/Fix Podcast
    Andy Pilgrim Episode of Break/Fix Podcast
    Mozilla Article on Car Privacy
    Tech Transforms Halloween Series Episode 1
    Tech Transforms Halloween Series Episode 2
  • In the second episode of our 3-part Halloween series, Grant Schneider, Senior Director of Cybersecurity Services at Venable and former federal CISO, discusses the frightening implications of insider threats, how we are protecting critical infrastructure, and what it was like working on cybersecurity in the White House under both President Obama and President Trump.

    Key Topics

    00:03:59 Increased consequences led to rise of cybersecurity
    00:08:47 Insider threat, screening, hiring, malicious actor, Manning, Snowden
    00:09:53 Snowden challenges legality of government surveillance
    00:15:00 Adversary gains access, steals information, demands ransom
    00:19:19 Different levels of readiness present challenges
    00:23:15 Helping clients & coalitions for cybersecurity policy
    00:24:58 Consistency in technology and cybersecurity under past presidents
    00:27:47 Cybersecurity is like warfare or terrorism
    00:32:30 AI tools and data drive persuasive information
    00:34:50 National Cybersecurity Awareness Month raises awareness on cybersecurity and encourages action to protect businesses
    00:42:40 Diversity of experiences leads to career growth
    00:44:01 Adaptive, willing, and able to learn

    Introduction to National Cybersecurity Awareness Month

    Purpose of Raising Awareness About Cybersecurity

    Grant explained that one of the great things about National Cybersecurity Awareness Month is exactly that: raising awareness and creating an opportunity to spend time thinking about and discussing cybersecurity. He noted that for organizations already focused on cybersecurity daily, the awareness month may not raise their awareness much more. However, many organizations don't think about cybersecurity constantly, so for business leaders and executives who now recognize the existential threat a cyber incident poses, the awareness month offers a chance to have important conversations they may previously have avoided due to a lack of understanding.

    National Cybersecurity Awareness Month: "You're only one bad kind of cyber incident away from your organization not existing anymore." — Grant Schneider

    Opportunities for Organizations to Have Conversations About Cybersecurity

    According to Grant, leaders who don't grasp cybersecurity risks may personally fear initiating conversations to ask what the organization needs to do to address risks. National Cybersecurity Awareness Month provides an opportunity for these leaders to have the necessary conversations and gain education. Grant said the awareness month is a chance to discuss basics, like implementing multifactor authentication, patching and updates. He observed that much of the content produced for the awareness month focuses on cybersecurity fundamentals, so it allows organizations to dedicate time to shoring up basic defenses. Overall, Grant emphasized National Cybersecurity Awareness Month facilitates essential cybersecurity conversations for organizations and leaders who otherwise may not prioritize it consistently.

    Evolution of Insider Threat in the Intelligence Community

    Screening Out Bad Actors During the Hiring Process

    Grant explains that in the early days of his career at the Defense Intelligence Agency (DIA), insider threat mitigation focused on screening out bad actors during the hiring process. The belief was that malicious insiders were either people with concerning backgrounds trying to get hired, or nation-state actors attempting to plant individuals within the intelligence community. The screening process aimed to identify and reject potentially problematic candidates.

    Nation-State Actors Planting Individuals Within the Community

    He mentions the possibility of nation-state actors attempting to plant malicious insiders in the intelligence community through the hiring process. This underscores the perceived risk that foreign governments would try to insert spies or saboteurs into the ranks of U.S. intelligence agencies.

    Shift Towards Insiders Becoming Whistleblowers

    Grant then discusses how over time, the nature of insider threats shifted more towards insiders becoming whistleblowers driven by ideology or moral objections. He cites the Manning and Snowden cases as examples of this shift. Rather than foreign plants, these were trusted insiders who went on to leak classified information out of claimed conscience.

    Importance of Not Making Negative Generalizations About Whistleblowers

    While describing this evolution, Grant is careful not to make generalizations condemning all whistleblowers. He maintains that whistleblowing serves an important function in society.

    Snowden’s Different View on the Community’s Work and His Actions

    In Snowden's case specifically, Grant characterizes his mindset as believing the intelligence community's lawful work was actually wrong. This led Snowden to take matters into his own hands by leaking classified materials.

    Importance of Diversity of Experiences for Personal and Professional Growth

    Actively Seeking Out Different Experiences Within Current Role

    Grant emphasized the importance of seeking diversity of experiences, even within one's current job. He advised against constantly changing jobs, as that may look unfavorable on a resume. However, within a role, one should actively volunteer for new projects and tasks that provide exposure to different skills. Being willing to say "yes" and take on unfamiliar work leads to becoming a more versatile, well-rounded employee.

    Saying Yes to New Opportunities

    Grant recommended that when presented with new opportunities at work, such as a manager asking for someone to work on a certain project, the best approach is to always say yes. Even if the work does not seem interesting or relevant, accepting the challenge provides a chance to learn new skills. Saying yes demonstrates eagerness to expand one's capabilities.

    The Importance of Diversity of Experiences: "Diversity of experiences, and whatever it is you're working on, when your boss, your coworkers say, hey, we're looking for someone to work on this, always say YES. I wanna go work on that as well." — Grant Schneider

    Becoming a Well-Rounded Employee and Leader

    According to Grant, embracing diverse experiences allows professionals to build unique skill sets and make themselves stand out. Having broad exposure equips individuals to work effectively on varied teams and projects. It enables adaptability that makes one a more valuable contributor. Grant emphasized that diversity of experience helps shape well-rounded leaders who can thrive in any environment.

    View Work and Life as a Scavenger Hunt for Acquiring Skills

    Grant suggested viewing one's career progression as a scavenger hunt to collect talents and capabilities. Being strategic and purposeful about pursuing different opportunities maximizes growth. Grant urged professionals to reflect on the skills they want in their toolbox and then leverage jobs and other life experiences to intentionally develop expertise across multiple areas.

    The Consequences of Cyber Incidents and the Growth of Cybersecurity

    Increased Consequences of Cyber Incidents

    As Grant explained, when he first joined DIA, there were no connections to the unclassified internet in the building. Over time, every employee had both unclassified and classified computers to connect to various networks. As more devices were connected to networks, the potential consequences of a cyber incident grew. With more reliance on technology and interconnected systems, a cyberattack could cause major disruptions to operations. Grant noted that this increase in risk led to a greater focus on cybersecurity within both government and private sector organizations.

    The Consistency of Approach Towards Technology and Cybersecurity across Administrations: "In my opinion, technology and cybersecurity has not been very politicized. And really going back from Bush to Obama, to Trump and to Biden, in my opinion, we've seen a good bit of consistency around the directions, the people have been headed." — Grant Schneider

    Creation of Dedicated Security Operations Centers

    Grant discussed how the growing risks from cyber incidents led to the creation of security operations centers focused on monitoring threats. Whereas IT operations teams had previously handled security, cybersecurity emerged as its own discipline requiring specialized skills and 24/7 vigilance. Organizations established dedicated security operations centers tasked with detecting and responding to security events around the clock. This represented a major shift as cybersecurity transitioned from a purely policy function to an operational capability within organizations.

    Cybersecurity as a Distinct Operational Entity in Public and Private Sectors

    Over the years, cybersecurity evolved from an information security policy role to a distinct operational entity, according to Grant. This transition occurred in both the public sector and private sector as the nature of threats changed. Cybersecurity is now recognized as requiring its own set of skills and continuous monitoring separate from traditional IT operations. Grant noted that this shift has continued with cybersecurity capabilities and staffing growing significantly across sectors.

    Understanding and Manipulating Information in Cyberspace

    Increasing Availability of Data and AI Tools

    Grant discussed how far more data is available now than in the past. He also mentioned how AI tools allow people to analyze and understand this data in new ways. For example, AI can help determine what information or messages are most likely to resonate with someone based on what is already known about their views and preferences. Grant suggested that the combination of more data and better AI-enabled analysis means information can be tailored and targeted to individuals in new ways, for good or bad purposes.

    Delivering Messages That Resonate With Individuals, Regardless of Truth

    Building on the availability of data and AI tools, Grant noted how messages can now be crafted in a customized way for each person. He said these tools make it possible to understand what each individual will find believable, so messages can be created that align with existing beliefs and preferences, regardless of whether they are factually true. Grant gave the example that false information could potentially be spread this way if the content resonates with what someone already thinks.

    Society’s Acceptance of Divisive and Blunt Opinions

    Grant suggested that technology capabilities enabling tailored messaging are emerging alongside the increased societal acceptance of divisive, controversial and blunt opinions being shared publicly. He noted that norms seem to have changed from when there were more things people didn't express out loud. Grant proposed that this societal shift combined with technological capabilities that can take advantage of divisions creates risks in terms of information manipulation.

    About Our Guest

    Grant Schneider’s entire 30-year career has focused on our nation’s security. Grant spent more than 20 years at the Defense Intelligence Agency, seven of which he served as the CIO. He then spent six years in the Executive Office of the President during the Obama and Trump administrations, focused on all aspects of federal and critical infrastructure cybersecurity. During that time, he served as a Senior Director for Cybersecurity Policy on the National Security Council staff and most recently as the Federal CISO. For the past three years, Grant has served as Senior Director of Cybersecurity Services at Venable, helping companies from across all sectors enhance their cybersecurity programs through the development and implementation of risk management programs as well as assisting with the preparation, response, and recovery from various cyber incidents, including ransomware.

    Episode Links

    Colonial Pipeline hack
    Live Free or Die Hard
    Alliance for Digital Innovation