Episodes

  • Summary of https://www.sciencedirect.com/science/article/pii/S0167811625000114

    Presents a meta-analysis of two decades of studies examining consumer resistance to artificial intelligence (AI). The authors synthesize findings from hundreds of studies with over 76,000 participants, revealing that AI aversion is context-dependent and varies based on the AI's label, application domain, and perceived characteristics.

    Interestingly, the study finds that negative consumer responses have decreased over time, particularly for cognitive evaluations of AI. Furthermore, the meta-analysis indicates that research design choices influence observed AI resistance, with studies using more ecologically valid methods showing less aversion.

    Consumers exhibit an overall small but statistically significant aversion to AI (average Cohen’s d = -0.21). This means that, on average, people tend to respond more negatively to outputs or decisions labeled as coming from AI compared to those labeled as coming from humans.
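
    For reference, Cohen's d is the standardized mean difference between the AI-labeled and human-labeled conditions. A minimal LaTeX rendering of the standard formula (the subscripts are ours, not the paper's notation):

    ```latex
    d = \frac{\bar{x}_{\mathrm{AI}} - \bar{x}_{\mathrm{human}}}{s_p},
    \qquad
    s_p = \sqrt{\frac{(n_{\mathrm{AI}} - 1)\,s_{\mathrm{AI}}^2 + (n_{\mathrm{human}} - 1)\,s_{\mathrm{human}}^2}{n_{\mathrm{AI}} + n_{\mathrm{human}} - 2}}
    ```

    A negative d therefore means the AI-labeled condition was rated lower on average; by the usual convention, |d| of about 0.2 counts as a small effect.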

    Consumer aversion to AI is strongly context-dependent, varying significantly by the AI label and the application domain. Embodied forms of AI, such as robots, elicit the most negative responses (d = -0.83) compared to AI assistants or mere algorithms. Furthermore, domains involving higher stakes and risks, like transportation and public safety, trigger more negative responses than domains focused on productivity and performance, such as business and management.

    Consumer responses to AI are not static and have evolved over time, generally becoming less negative, particularly for cognitive evaluations (e.g., performance or competence judgements). While initial excitement around generative AI in 2021 led to a near-null effect in cognitive evaluations, affective and behavioral responses remain significantly negative overall.

    The characteristics ascribed to AI significantly influence consumer responses. Negative responses are stronger when AI is described as having high autonomy (d = -0.28), inferior performance (d = -0.53), lacking human-like cues (anthropomorphism) (d = -0.23), and not recognizing the user's uniqueness (d = -0.24). Conversely, limiting AI autonomy, highlighting superior performance, incorporating anthropomorphic cues, and emphasizing uniqueness recognition can alleviate AI aversion.

    The methodology used to study AI aversion impacts the findings. Studies with greater ecological validity, such as field studies, those using incentive-compatible designs, perceptually rich stimuli, clear explanations of AI, and behavioral (rather than self-report) measures, document significantly smaller aversion towards AI. This suggests that some documented resistance in purely hypothetical lab settings might be an overestimation of real-world aversion.
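
    To make the moderator comparison concrete, the sketch below pools per-study effect sizes within two subgroups using standard fixed-effect, inverse-variance weighting. The study counts and d values are invented placeholders, not data from the meta-analysis; only the qualitative pattern (weaker aversion in more ecologically valid designs) follows the summary above.

    ```python
    import math

    def pool_effects(studies):
        """Fixed-effect, inverse-variance pooling of standardized mean differences.

        `studies` is a list of (d, variance) tuples, one per study.
        Returns the pooled d and its standard error.
        """
        weights = [1.0 / var for _, var in studies]
        pooled_d = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
        se = math.sqrt(1.0 / sum(weights))
        return pooled_d, se

    # Illustrative (made-up) per-study effects: hypothetical lab vs. field studies.
    lab_studies = [(-0.35, 0.02), (-0.28, 0.03), (-0.22, 0.04)]
    field_studies = [(-0.10, 0.03), (-0.05, 0.02)]

    for label, studies in [("lab", lab_studies), ("field", field_studies)]:
        d, se = pool_effects(studies)
        print(f"{label:>5}: pooled d = {d:+.2f} (SE = {se:.2f})")
    ```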

  • Summary of https://cset.georgetown.edu/publication/putting-explainable-ai-to-the-test-a-critical-look-at-ai-evaluation-approaches/

    This Center for Security and Emerging Technology issue brief examines how researchers evaluate explainability and interpretability in AI-enabled recommendation systems. The authors' literature review reveals inconsistencies in defining these terms and a primary focus on assessing system correctness (building systems right) over system effectiveness (building the right systems for users).

    They identified five common evaluation approaches used by researchers, noting a strong preference for case studies and comparative evaluations. Ultimately, the brief suggests that without clearer standards and expertise in evaluating AI safety, policies promoting explainable AI may fall short of their intended impact.

    Researchers do not clearly differentiate between explainability and interpretability when describing these concepts in the context of AI-enabled recommendation systems. The descriptions of these principles in research papers often use a combination of similar themes. This lack of consistent definition can lead to confusion and inconsistent application of these principles.

    The study identified five common evaluation approaches used by researchers for explainability claims: case studies, comparative evaluations, parameter tuning, surveys, and operational evaluations. These approaches can assess either system correctness (whether the system is built according to specifications) or system effectiveness (whether the system works as intended in the real world).

    Research papers show a strong preference for evaluations of system correctness over evaluations of system effectiveness. Case studies, comparative evaluations, and parameter tuning, which are primarily focused on testing system correctness, were the most common approaches. In contrast, surveys and operational evaluations, which aim to test system effectiveness, were less prevalent.

    Researchers adopt various descriptive approaches for explainability, which can be categorized into descriptions that rely on other principles (like transparency), focus on technical implementation, state the purpose as providing a rationale for recommendations, or articulate the intended outcomes of explainable systems.

    The findings suggest that policies for implementing or evaluating explainable AI may not be effective without clear standards and expert guidance. Policymakers are advised to invest in standards for AI safety evaluations and develop a workforce capable of assessing the efficacy of these evaluations in different contexts to ensure reported evaluations provide meaningful information.

  • Summary of https://www.hbs.edu/ris/Publication%20Files/24-038_51f8444f-502c-4139-8bf2-56eb4b65c58a.pdf

    Investigates the economic value of open source software (OSS) by estimating both the supply-side (creation cost) and the significantly larger demand-side (usage value). Utilizing unique global data on OSS usage by firms, the authors calculate the cost to recreate widely used OSS and the replacement value for firms if OSS did not exist.

    Their findings reveal a substantial multi-trillion dollar demand-side value, far exceeding the billions needed for recreation, highlighting OSS's critical, often unmeasured, role in the modern economy. The study also examines the concentration of value creation among a small percentage of developers and the distribution of OSS value across different programming languages and industries.

    This study estimates that the demand-side value of widely-used open source software (OSS) is significantly larger than its supply-side value. The researchers estimate the supply-side value (the cost to recreate the most widely used OSS once) to be $4.15 billion, while the demand-side value (the replacement value for each firm that uses the software and would need to build it internally if OSS did not exist) is estimated to be much larger at $8.8 trillion. This highlights the substantial economic benefit derived from the reuse of OSS by numerous firms.
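
    A minimal Python sketch of this accounting logic follows; the package names, costs, and firm counts are invented for illustration, and only the distinction between "recreate once" (supply side) and "every using firm would rebuild it" (demand side) is taken from the study.

    ```python
    # All figures are illustrative placeholders, not numbers from the paper.
    packages = {
        # package: (one-time recreation cost in $, number of firms using it)
        "web-framework": (2_000_000, 50_000),
        "crypto-lib":    (5_000_000, 120_000),
        "orm-toolkit":   (1_000_000, 8_000),
    }

    # Supply side: recreate each widely used package exactly once.
    supply_side = sum(cost for cost, _ in packages.values())

    # Demand side: each using firm would have to rebuild the package internally.
    demand_side = sum(cost * n_firms for cost, n_firms in packages.values())

    print(f"supply-side value:   ${supply_side:,}")
    print(f"demand-side value:   ${demand_side:,}")
    print(f"demand/supply ratio: {demand_side / supply_side:,.0f}x")
    ```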

    The research reveals substantial heterogeneity in the value of OSS across different programming languages. For example, in terms of demand-side value, Go is estimated to be more than four times the value of the next language, JavaScript, while Python has a considerably lower value among the top languages analyzed. This indicates that the economic impact of OSS is not evenly distributed across the programming language landscape.

    The study finds a high concentration in the creation of OSS value, with only a small fraction of developers contributing the vast majority of the value. Specifically, it's estimated that 96% of the demand-side value is created by only 5% of OSS developers. These top contributors also tend to contribute to a substantial number of repositories, suggesting their impact is broad across the OSS ecosystem.
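
    As a small illustration of how such a concentration figure can be computed, the sketch below draws a heavily skewed synthetic distribution of per-developer value and measures the share attributable to the top 5% of contributors; the distribution is made up and is not the study's data.

    ```python
    import random

    random.seed(0)

    # Synthetic, heavy-tailed per-developer value contributions (Pareto-like).
    contributions = [random.paretovariate(1.1) for _ in range(10_000)]

    contributions.sort(reverse=True)
    top_5_percent = contributions[: len(contributions) // 20]  # top 5% of developers

    share = sum(top_5_percent) / sum(contributions)
    print(f"share of total value from the top 5% of developers: {share:.0%}")
    ```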

    Measuring the value of OSS is inherently difficult due to its non-pecuniary (free) nature and the lack of centralized usage tracking. This study addresses this challenge by leveraging unique global data from two complementary sources: the Census II of Free and Open Source Software – Application Libraries and the BuiltWith dataset, which together capture OSS usage by millions of global firms. By focusing on widely-used OSS, the study aims to provide a more precise understanding of its value compared to studies that estimate the replacement cost of all existing OSS.

    The estimated demand-side value of OSS suggests that if it did not exist, firms would need to spend approximately 3.5 times more on software than they currently do. This underscores the massive cost savings and productivity enhancement that the existence of OSS provides to the economy. The study argues that recognizing this value is crucial for the future health of the digital economy and for informing policymakers about the importance of supporting the OSS ecosystem.

  • Summary of https://www.hoover.org/sites/default/files/research/docs/cgri-closer-look-110-ai.pdf

    Examines the potential impact of artificial intelligence on corporate boardrooms and governance. It argues that while AI's influence on areas like decision-making is acknowledged, its capacity to reshape the operations and practices of the board itself warrants greater attention.

    The authors explore how AI could alter board functions, information processing, interactions with management, and the role of advisors, while also considering the challenges of maintaining board-management boundaries and managing information access. Ultimately, the piece discusses how AI could transform various governance obligations and presents both the benefits and risks associated with its adoption in the boardroom.

    AI has the potential to significantly transform corporate governance by reshaping how boards function, process information, interact with management and advisors, and fulfill specific governance obligations. Boards are already aware of AI's potential, ranking its increased use across the organization as a top priority.

    AI can reduce the information asymmetry between the board and management by increasing the volume, type, and quality of information available to directors. This allows boards to be more proactive and less reliant on management-provided information, potentially leading to better oversight. AI tools can enable directors to search and synthesize public and private information more easily.

    The adoption of AI will significantly increase the expectations and responsibilities of board members. Directors will be expected to spend more time preparing for meetings by reviewing and analyzing a greater quantity of information. They will also be expected to ask higher-quality questions and provide deeper insights, leveraging AI tools for analysis and benchmarking.

    AI can enhance various governance functions, including strategy, compensation, human capital management, audit, legal matters, and board evaluations. For example, AI can facilitate richer scenario planning, provide real-time compensation benchmarking, identify skills gaps in human capital, detect potential fraud, monitor legal developments, and analyze board effectiveness. This may also lead to a supplementation or replacement of work currently done by paid advisors.

    The integration of AI into the boardroom also presents several risks and challenges, including maintaining the separation of board and management responsibilities, managing information access, ensuring data security, addressing the potential for errors and biases in AI models, and avoiding "analysis paralysis". Boards will need to develop new protocols and skills to effectively utilize AI while mitigating these risks.

  • Summary of https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5097445

    This working paper by De Freitas et al. investigates why people resist forming relationships with AI companions, despite their potential to alleviate loneliness. The authors reveal that while individuals acknowledge AI's superior availability and non-judgmental nature compared to humans, they do not consider AI relationships to be "true" due to a perceived lack of essential qualities like mutual caring and emotional understanding. Through several studies, the research demonstrates that this resistance stems from a belief that AI cannot truly understand or feel emotions, leading to the perception of one-sided relationships.

    Even direct interaction with AI companions only marginally increases acceptance by improving perceptions of superficial features, failing to alter deeply held beliefs about AI's inability to fulfill core relational values. Ultimately, the paper highlights significant psychological barriers hindering the widespread adoption of AI companions for social connection.

    People exhibit resistance to adopting AI companions despite acknowledging their superior capabilities in certain relationship-relevant aspects like availability and being non-judgmental. This resistance stems from the belief that AI companions are incapable of realizing the essential values of relationships, such as mutual caring and emotional understanding.

    This resistance is rooted in a dual character concept of relationships, where people differentiate between superficial features and essential values. Even if AI companions possess the superficial features (e.g., constant availability), they are perceived as lacking the essential values (e.g., mutual caring), leading to the judgment that relationships with them are not "true" relationships.

    The belief that AI companions cannot realize essential relationship values is linked to perceptions of AI's deficiencies in mental capabilities, specifically the ability to understand and feel emotions, which are seen as crucial for mutual caring and thus for a relationship to be considered mutual and "true". Physical intimacy was not found to be a significant mediator in this belief.

    Interacting with an AI companion can increase willingness to engage with it for friendship and romance, primarily by improving perceptions of its advertised, more superficial capabilities (like being non-judgmental and available). However, such interaction does not significantly alter the fundamental belief that AI is incapable of realizing the essential values of relationships. The mere belief that one is interacting with a human (even when it is an AI) enhances the effectiveness of the interaction in increasing acceptance.

    The strong, persistent belief about AI's inability to fulfill the essential values of relationships represents a significant psychological barrier to the widespread adoption of AI companions for reducing loneliness. This suggests that the potential loneliness-reducing benefits of AI companions may be difficult to achieve in practice unless these fundamental beliefs can be addressed. The resistance observed in the relationship domain, where values are considered essential, might be stronger than in task-based domains where performance is the primary concern.

  • Summary of https://cdn.prod.website-files.com/65af2088cac9fb1fb621091f/67aaca031ed677c879434284_Final_US%20Open-Source%20AI%20Governance.pdf

    This document from the Center for AI Policy and Yale Digital Ethics Center examines the contentious debate surrounding the governance of open-source artificial intelligence in the United States. It highlights the tension between the ideological values promoting open access and geopolitical considerations, particularly competition with China.

    The authors analyze various policy proposals for open-source AI, creating a rubric that combines ideological factors like transparency and innovation with geopolitical risks such as misuse and global power dynamics. Ultimately, the paper suggests targeted policy interventions over broad restrictions to balance the benefits of open-source AI with national security concerns, emphasizing ongoing monitoring of technological advancements and geopolitical landscapes.

    The debate surrounding open-source AI regulation involves a tension between ideological values (innovation, transparency, power distribution) and geopolitical considerations, particularly US-China competition (Chinese misuse, backdoor risks, global power dynamics). Policymakers are grappling with how to reconcile these two perspectives, especially in light of advancements in Chinese open-source AI.

    Heavy-handed regulation like blanket export controls on all open-source AI models is likely sub-optimal and counterproductive. Such controls would significantly disrupt the development of specific-use applications, have limited efficacy against Chinese misuse, and could undermine US global power by discouraging international use of American technology.

    More targeted interventions are suggested as preferable to broad restrictions. The paper analyzes policies such as industry-led risk assessments for model release and government funding for an open-source repository of security audits. These approaches aim to balance the benefits of open-source AI with the need to address specific security risks more effectively and with less disruption to innovation.

    The nature of open-source AI, being globally accessible information, makes it inherently difficult to decouple the US and Chinese ecosystems. Attempts to do so through export controls may have unintended consequences and could be circumvented due to the ease of information transfer.

    Further research and monitoring are crucial to inform future policy decisions. Key areas for ongoing attention include tracking the performance gap between open and closed models, understanding the origins of algorithmic innovations, developing objective benchmarks for comparing models from different countries, and advancing technical safety mitigations for open models.

  • Summary of https://arxiv.org/pdf/2503.05628

    This expert strategy document by Dan Hendrycks, Eric Schmidt, and Alexandr Wang addresses the national security implications of rapidly advancing AI, particularly the anticipated emergence of superintelligence.

    The authors propose a three-pronged framework drawing parallels with Cold War strategies: deterrence through the concept of Mutual Assured AI Malfunction (MAIM), nonproliferation to restrict access for rogue actors, and competitiveness to bolster national strength.

    The text examines threats from rival states, terrorists, and uncontrolled AI, arguing for proactive measures like cyber espionage and sabotage for deterrence, export controls and information security for nonproliferation, and domestic AI chip manufacturing and legal frameworks for competitiveness. Ultimately, the document advocates for a risk-conscious, multipolar strategy to navigate the transformative and potentially perilous landscape of advanced artificial intelligence.

    Rapid advances in AI, especially the anticipation of superintelligence, present significant national security challenges akin to those posed by nuclear weapons. The dual-use nature of AI means it can be leveraged for both economic and military dominance by states, while also enabling rogue actors to develop bioweapons and launch cyberattacks. The potential for loss of control over advanced AI systems further amplifies these risks.

    The concept of Mutual Assured AI Malfunction (MAIM) is introduced as a likely default deterrence regime. This is similar to nuclear Mutual Assured Destruction (MAD), where any aggressive pursuit of unilateral AI dominance by a state would likely be met with preventive sabotage by its rivals, ranging from cyberattacks to potential kinetic strikes on AI infrastructure.

    A critical component of a superintelligence strategy is nonproliferation. Drawing from precedents in restricting weapons of mass destruction, this involves three key levers: compute security to track and control the distribution of high-end AI chips, information security to protect sensitive AI research and model weights from falling into the wrong hands, and AI security to implement safeguards that prevent the malicious use and loss of control of AI systems.

    Beyond mitigating risks, states must also focus on competitiveness in the age of AI to ensure their national strength. This includes strategically integrating AI into military command and control and securing drone supply chains, guaranteeing access to AI chips through domestic manufacturing and strategic export controls, establishing legal frameworks to govern AI agents, and maintaining political stability in the face of rapid automation and the spread of misinformation.

    Existing strategies for dealing with advanced AI, such as a completely hands-off approach, voluntary moratoria, or a unilateral pursuit of a strategic monopoly, are flawed and insufficient to address the multifaceted risks and opportunities presented by AI. The authors propose a multipolar strategy based on the interconnected pillars of deterrence (MAIM), nonproliferation, and competitiveness, drawing lessons from the Cold War framework adapted to the unique challenges of superintelligence.

  • Summary of https://www.sciencedirect.com/science/article/pii/S2666920X24001516

    This paper examines how higher education institutions globally are addressing the integration of generative AI by analyzing the adoption policies of 40 universities across six regions through the lens of the Diffusion of Innovations Theory.

    The study identifies key themes related to compatibility, trialability, and observability of AI, the communication channels being used, and the defined roles and responsibilities for faculty, students, and administrators.

    Findings reveal a widespread emphasis on academic integrity and enhancing learning, but also highlight gaps in comprehensive policies and equitable access, offering insights for policymakers to develop inclusive AI integration strategies.

    Universities globally are proactively addressing the integration of generative AI (GAI) in higher education, primarily focusing on academic integrity, enhancing teaching and learning, and promoting AI literacy. This is evidenced by the emphasis on these themes in the analysis of policies across 40 universities from six global regions. The study highlights that institutions recognize the transformative potential of GAI while also being concerned about its ethical implications and impact on traditional educational values.

    The study, utilizing the Diffusion of Innovations Theory (DIT), reveals that while universities are exploring GAI's compatibility, trialability, and observability, significant gaps exist in comprehensive policy frameworks, particularly concerning data privacy and equitable access. The research specifically investigated these innovation characteristics in university policies. Although many universities address academic integrity and the potential for enhancing education (compatibility), and are encouraging experimentation (trialability), fewer have robust strategies for evaluating GAI's impact (observability) and clear guidelines for data privacy and equal access.

    Communication about GAI adoption is varied, with digital platforms being the most common channel, but less than half of the studied universities demonstrate a comprehensive approach to disseminating information and fostering dialogue among stakeholders. The analysis identified five main communication channels: digital platforms, interactive learning and engagement channels, direct and personalized communication channels, collaborative and social networks, and advisory, monitoring, and feedback channels. The finding that not all universities actively use a range of these channels suggests a need for more focused efforts in this area.

    Higher education institutions are establishing clear roles and responsibilities for faculty, students, and administrators in the context of GAI adoption. Faculty are largely tasked with integrating GAI into curricula and ensuring ethical use, students are responsible for ethical use and maintaining academic integrity, and administrators are primarily involved in policy development, implementation, and providing support. This highlights a structured approach to managing the integration of GAI within the educational ecosystem.

    Cultural backgrounds may influence the emphasis of GAI adoption policies, with institutions in North America and Europe often prioritizing innovation and critical thinking, while those in Asia emphasize ethical use and compliance, and universities in Africa and Latin America focus on equity and accessibility. This regional variation suggests that while there are common values, the specific challenges and priorities related to GAI adoption can differ based on cultural and socio-economic contexts.

  • Summary of https://unesdoc.unesco.org/ark:/48223/pf0000391105

    This UNESCO publication presents a global framework for AI competency in students. Recognizing the increasing role of AI, it argues for proactive education to prepare responsible users and co-creators.

    The framework outlines twelve competencies across four dimensions: human-centered mindset, ethics of AI, AI techniques and applications, and AI system design, each with three progression levels. It aims to guide educators in integrating AI learning objectives into curricula, emphasizing critical judgment, ethical awareness, foundational knowledge, and inclusive design.

    The document also discusses implementation strategies, teacher professionalization, pedagogical approaches, and competency-based assessments for AI education.

    The UNESCO AI competency framework for students aims to equip students with the values, knowledge, and skills necessary to thrive in the AI era, becoming responsible and creative citizens. It is the first global framework of its kind, intended to support the development of core competencies for students to critically examine and understand AI from holistic perspectives, including ethical, social, and technical dimensions.

    The framework is structured around 12 competencies spanning four dimensions: Human-centred mindset, Ethics of AI, AI techniques and applications, and AI system design, across three progression levels: Understand, Apply, and Create. This structure is designed to provide a spiral learning sequence across grade levels, helping students progressively build a systematic and transferable understanding of AI competencies.

    The framework is grounded in key principles that include fostering a critical approach to AI, prioritizing human-centred interaction with AI, encouraging environmentally sustainable AI, promoting inclusivity in AI competency development, and building core AI competencies for lifelong learning. It embodies UNESCO's mandate by anchoring its vision of AI and education in principles of human rights, inclusion, and equity.

    The primary target audience for the AI CFS includes policy-makers, curriculum developers, providers of education programmes on AI for students, school leaders, teachers, and educational experts. The framework is intended to serve as a guide for public education systems to build the competencies required for the effective implementation of national AI strategies and the creation of inclusive, just, and sustainable futures. It is designed as a global reference that needs to be tailored to the diverse readiness levels of local education systems.

    The framework envisions students as active co-creators of AI and responsible citizens. It emphasizes the importance of critical judgment of AI solutions, awareness of citizenship responsibilities in the era of AI, foundational AI knowledge for lifelong learning, and inclusive, sustainable AI design. Ultimately, the AI CFS aims to prepare students to not only use AI effectively and ethically but also to contribute to shaping its future development and relationship with society.

  • Summary of https://media.licdn.com/dms/document/media/v2/D561FAQHEys4iGQj7CA/feedshare-document-pdf-analyzed/B56ZUN7jLFHQAY-/0/1739695481660?e=1743033600&v=beta&t=nLUoVEs06lwzFgHpx8DbIfd6nMyvXem1ZrpqPSChhiA

    "Agentic AI – the new frontier in GenAI," explores the transformative potential of agentic artificial intelligence, particularly within the realm of generative AI. It highlights how autonomous AI systems, capable of making decisions and acting with limited human input, are evolving through machine learning and multimodal data processing to automate complex tasks and optimize workflows.

    The text emphasizes the strategic imperative for organizations to adopt this technology early to gain competitive advantages, improve efficiency, enhance customer experiences, and drive revenue growth, providing numerous real-world examples across various industries and business functions.

    It also discusses key considerations for implementing agentic AI, including strategic planning, technological infrastructure, data readiness, talent acquisition, and ethical implications, alongside a comparison of commercial and open-source tools.

    Ultimately, the document positions agentic AI as a crucial element for future business success, requiring a strategic vision and commitment to realize its full potential in an increasingly AI-driven world.

    Agentic AI, with its advanced human-like reasoning and interaction capabilities, is transforming various sectors including manufacturing, healthcare, finance, retail, transportation, and energy.

    Organisations' AI strategies should leverage multimodal GenAI capabilities while ensuring ethical AI safeguards to drive autonomous process re-engineering and enhanced decision-making across all business areas. When integrated effectively, agentic AI can enhance efficiency, lower costs, improve customer experience, and drive revenue growth.

    Agentic AI systems possess the capacity to make autonomous decisions and take actions to achieve specific goals with limited or no direct human intervention, exhibiting key aspects like autonomy, goal-oriented behaviour, environment interaction, learning capability, workflow optimisation, and multi-agent and system conversation.

    The evolution of agentic AI has progressed through the integration of machine learning for data learning and NLP-enabled user interactions, the introduction of multimodality combining various data types for enhanced interactions, and the development of advanced autonomy and real-time interactions enabling human-like reasoning and independent decision-making.

  • Summary of https://www.hbs.edu/ris/Publication%20Files/25-023_8ee1f38f-d949-4b49-80c8-c7a736f2c27b.pdf

    Examines the gender gap in the adoption and usage of generative AI tools across the globe. Synthesizing data from 18 studies involving over 140,000 individuals, the authors reveal a consistent pattern: women are less likely than men to use generative AI.

    This gap persists even when access to these technologies is equalized, suggesting deeper underlying causes. Analysis of internet traffic data and mobile app downloads further supports these findings, indicating a skewed gender distribution among users of popular AI platforms.

    The research explores potential mechanisms behind this disparity, such as differences in knowledge, confidence, and perceptions of AI's ethical implications. The authors caution that this gender gap could lead to biased AI systems and exacerbate existing inequalities, emphasizing the need for targeted interventions.

    The most prominent explanations behind the gender gap in generative AI adoption are:

    Lower familiarity and knowledge: Women consistently report less familiarity with generative AI tools. They are also more likely to report not knowing how to use AI tools.

    Lower confidence and persistence: Women show less confidence in their ability to use AI tools effectively. They are also less persistent when using generative AI, being less likely to attempt prompting multiple times for desired results.

    Perception of unethical use: Women are more likely to perceive the use of AI in coursework or assignments as unethical or as cheating.

    Mixed perceptions of benefits: Studies show mixed results regarding whether men and women equally perceive the benefits and usefulness of generative AI. Some studies indicate women perceive lower productivity benefits and are less likely to see generative AI as useful in job searches or educational settings.

    No significant differences in trust or risk perception: The study indicates that gender differences in generative AI adoption are likely driven by disparities in knowledge, familiarity, and confidence, rather than differences in trust or risk perceptions. There are no statistically significant differences in men and women trusting the accuracy of generative AI, or in expressing concerns about risks such as data breaches or job redundancy.

  • Summary of https://re-ai.berkeley.edu/sites/default/files/responsible_use_of_generative_ai_uc_berkeley_2025.pdf

    A playbook for product managers and business leaders seeking to responsibly use generative AI (genAI) in their work and products. It emphasizes proactively addressing risks like data privacy, inaccuracy, and bias to build trust and maintain accountability.

    The playbook outlines ten actionable plays for organizational leaders and product managers to integrate responsible AI practices, improve transparency, and mitigate potential harms. It underscores the business benefits of responsible AI, including enhanced brand reputation and regulatory compliance.

    Ultimately, the playbook aims to help organizations and individuals capitalize on genAI's potential while ensuring its ethical and sustainable implementation.

    GenAI has diverse applications and is used for automating work, generating content, transcribing voice, and powering new products and features. Organizations can use different genAI models: off-the-shelf tools, enterprise solutions, or open models, which can be customized for specific needs and products.

    Adoption of genAI can lead to increased productivity and efficiency. Organizations that address the risks associated with genAI are best positioned to capitalize on the benefits, and responsible AI practices can foster a positive brand image and customer loyalty.

    There are key risks product managers need to consider when using genAI, especially regarding data privacy, transparency, inaccuracy, bias, safety, and security. There are also several challenges to using genAI responsibly, including a lack of organizational policies and individual education, the immaturity of the industry, and the replication of inequitable patterns that exist in society.

  • Summary of https://assets.ctfassets.net/2pudprfttvy6/5hucYCFs2oKtLHEqGGweZa/cf02ebfc138e4a3f7e54f78d36fc1eef/Job-Skills-Report-2025.pdf

    The Coursera Job Skills Report 2025 analyzes the fastest-growing skills for employees, students, and job seekers, highlighting the impact of generative AI. The report draws from data of over five million enterprise learners across thousands of institutions.

    Key findings emphasize the surging demand for AI skills like GenAI, computer vision, and machine learning, alongside crucial skills in cybersecurity, data ethics, and risk management. These trends reflect the need for individuals and organizations to adapt to technological advancements and evolving job market demands.

    The report also identifies regional differences in skill priorities and provides recommendations for businesses, educational institutions, governments, and learners to foster workforce readiness. Overall, the report underscores the importance of continuous upskilling and reskilling in areas like AI, data, and cybersecurity to thrive in the future of work.

    GenAI skills are in high demand and are rapidly growing across all enterprise learners. Course enrollments in GenAI have surged, with a significant portion of learners coming from India, Colombia, and Mexico. This growth highlights the increasing need for individuals to develop AI capabilities to succeed in the workplace.

    Cybersecurity and risk management skills are crucial due to the increase in cyberattacks. As cyberattacks become more frequent and sophisticated, there is a growing demand for professionals who can identify, assess, and mitigate risks.

    Data ethics and data governance are growing priorities, especially among employees and students. There is an increasing emphasis on responsibly managing and analyzing customer data, driven by the need to ensure "safe and secure" AI use.

    Students are focusing on sustainability skills such as waste minimization, business continuity planning, and disaster recovery. This focus aligns with the growing demand for green jobs and reflects concerns about the effects of climate change.

    Upskilling and reskilling initiatives are vital for workforce readiness. Businesses, higher education institutions, and governments must work together to equip individuals with essential skills in AI, cybersecurity, and data literacy. These initiatives are crucial for improving employability, productivity, and overall competitiveness in a rapidly evolving job market.

  • Summary of https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/the-critical-role-of-strategic-workforce-planning-in-the-age-of-ai

    McKinsey emphasizes the growing importance of strategic workforce planning (SWP) in the age of rapidly evolving technology, particularly generative AI. It highlights how forward-thinking companies are treating talent management with the same importance as financial capital, using SWP to anticipate future needs and proactively manage their workforce.

    The article outlines five best practices, including prioritizing talent investments, considering both capacity and capabilities, planning for multiple scenarios, filling talent gaps innovatively, and embedding SWP into business operations. By adopting these practices, organizations can improve their agility, ensure they have the right people with the right skills, and gain a competitive advantage in a dynamic market.

    The authors stress that SWP is crucial for navigating technological changes and ensuring long-term resilience. Ultimately, SWP allows for data-driven talent decisions, resource allocation, and a shift away from reactive hiring practices.

    The five best practices for companies preparing for disruptions from technological changes such as generative AI through strategic workforce planning (SWP) are:

    Prioritizing talent investments as much as financial investments. Successful organizations understand that their workforce is a strategic asset, and investing in talent development and retention is essential for long-term health. Employees represent both an organization's largest investment and its deepest source of value.

    Considering both capacity and capabilities. Organizations can identify the specific skills and competencies required for critical roles that drive higher performance and create more value.

    Planning for multiple business scenarios. By implementing a scenario-based approach, organizations create flexibility for rapidly changing industry conditions.

    Taking an innovative approach to filling talent gaps. Weigh the time and cost implications of internal versus external hires, considering internal redeployments, reskilling or upskilling existing talent, acquisitions, and outsourcing.

    Embedding SWP into business as usual. Strategic workforce planning should become a business-as-usual process, not just a one-off exercise. By embedding SWP into core business operations, companies can better anticipate workforce needs, respond to changing demands, and ensure long-term agility and resilience.

  • Summary of https://openpraxis.org/articles/777/files/6749b446d17e9.pdf

    This document presents a collaboratively written manifesto offering a critical examination of the integration of Generative AI (GenAI) in higher education. It identifies both the positive and negative aspects of GenAI's influence on teaching and learning, stressing that it is not a neutral tool and risks reinforcing existing biases.

    The manifesto calls for research-backed decision-making to ensure GenAI enhances human agency and promotes ethical responsibility in education. It also acknowledges that while GenAI has potential, educators must also think about the deprofessionalization of the education field if AI tools increasingly automate tasks like grading, tutoring, and content delivery, potentially leading to job displacement and reduced opportunities for educators.

    The text explores the importance of AI literacy for users and also looks to the risks of human-AI symbiosis, including the erosion of human judgement, autonomy and creative agency. The authors hope to encourage debate and offer insight into the future of GenAI in educational contexts.

    Here are the five main takeaways:

    GenAI is not a neutral tool. It reflects worldviews and can reinforce biases, potentially marginalizing diverse voices.

    GenAI can both enhance and diminish essential human elements in education. While it offers potential for personalized learning and efficiency, it also risks eroding creativity, critical thinking, and empathy.

    Ethical considerations are paramount. Issues such as bias, fairness, transparency, and data security must be addressed to ensure responsible deployment of GenAI.

    Educators, administrators, and policymakers need to rethink education. Continuing with 'business as usual' is not an option; a shift is needed to emphasize learning processes and adapt assessment methods.

    Robust, evidence-based research is crucial. Decisions about integrating GenAI in education should be guided by a deep understanding of its impacts.

  • Summary of https://www.microsoft.com/en-us/microsoft-cloud/blog/2025/02/18/maximizing-ais-potential-insights-from-microsoft-leaders-on-how-to-get-the-most-from-generative-ai/

    Microsoft's "The AI Decision Brief" explores the transformative power of generative AI across industries. It offers guidance on navigating the AI platform shift, emphasizing strategies for effective implementation and maximizing opportunities while mitigating risks.

    The brief outlines stages of AI readiness, key drivers of value, and examples of successful AI adoption. It addresses challenges such as skill shortages, security concerns, and regulatory compliance, providing insights from industry leaders and customer stories.

    Furthermore, it emphasizes building trustworthy AI through security, privacy, and safety measures, underscoring Microsoft's commitment to supporting customers in their AI transformation journey. The document concludes by highlighting the future potential of AI in sustainability and various sectors, emphasizing the importance of collaboration and continuous learning in the age of AI.

    Here are five key takeaways:

    Generative AI is rapidly transforming industries, presenting opportunities for unprecedented impact and growth for leaders who embrace its potential. Its adoption rate is historically fast, with usage among enterprises jumping from 55% in 2023 to 75% in 2024.

    AI is becoming more accessible, and Microsoft is committed to providing broad technology access to empower organizations and individuals worldwide to develop and use AI in ways that serve the public good.

    Organizations progress through five stages of AI readiness: exploring, planning, implementing, scaling, and realizing, each with its own strategic priorities. Identifying the correct stage and implementing appropriate strategies is critical for managing generative AI transformation.

    Trust is crucial for AI innovation, and organizations should prioritize responsible AI practices and security. Trustworthy AI comprises three pillars: security, privacy, and safety.

    AI leaders are seeing greater returns and accelerated innovation, averaging a 370% ROI, with top leaders achieving a 1000% ROI. The highest-performing organizations realize almost four times the value from their AI investments compared to those just getting started.

  • Summary of https://www.sciencedirect.com/science/article/pii/S030859612500014X

    Argues that the current approach to governing "AI" is misguided. It posits that what we call "AI" is not a singular, novel technology, but rather a diverse set of machine-learning applications that have evolved within a broader digital ecosystem over decades.

    The author introduces a framework centered on the digital ecosystem, composed of computing devices, networks, data, and software, to analyze AI's governance. Instead of attempting to regulate "AI" generically, the author suggests focusing on specific problems arising from individual machine learning applications.

    The author critiques several proposed AI governance strategies, including moratoria, compute control, and cloud regulation, revealing that most of these proposed strategies are really about controlling all components of the digital ecosystem, and not AI specifically. By shifting the focus to specific applications and their impacts, the paper advocates for more decentralized and effective policy solutions.

    Here are five important takeaways:

    What is referred to as "artificial intelligence" is a diverse set of machine learning applications that rely on a digital ecosystem, not a single technology.

    "AI governance" can be practically meaningless because of the numerous, diverse, and embedded applications of machine learning in networked computing.

    The digital ecosystem is composed of computing devices, networks, data, and software.

    Many policy concerns now attributed to "AI" were anticipated by policy conflicts associated with the rise of the Internet.

    Attempts to regulate "AI" as a general capability may require systemic control of digital ecosystem components and can be unrealistic, disproportionate, or dangerously authoritarian.

  • Summary of https://arxiv.org/pdf/2402.01659

    This paper examines how higher education institutions (HEIs) are responding to the rise of generative AI (GenAI) like ChatGPT. Researchers analyzed policies and guidelines from 116 US universities to understand the advice given to faculty and stakeholders.

    The study found that most universities encourage GenAI use, particularly for writing-related activities, and offer guidance for classroom integration. However, the authors caution that this widespread endorsement may create burdens for faculty and overlook long-term pedagogical implications and ethical concerns.

    The research explores the range of institutional approaches, from embracing to discouraging GenAI, and highlights considerations related to privacy, diversity, equity, and STEM fields. Ultimately, the findings suggest that HEIs are grappling with how to navigate the integration of GenAI into education, often with a focus on revising teaching methods and managing potential risks.

    Here are five important takeaways:

    Institutional embrace of GenAI: A significant number of higher education institutions (HEIs) are embracing GenAI, with 63% encouraging its use. Many universities provide detailed guidance for classroom integration, including sample syllabi (56%) and curriculum activities (50%). This indicates a shift towards accepting and integrating GenAI into the educational landscape.

    Focus on writing-related activities: A notable portion of GenAI guidance focuses on writing-related activities, while STEM-related activities, including coding, are mentioned less frequently and often vaguely (50%). This suggests an emphasis on GenAI's role in enhancing writing skills and a potential gap in exploring its applications in other disciplines.

    Ethical and privacy considerations: Over half of the institutions address the ethics of GenAI, including diversity, equity, and inclusion (DEI) (52%), as well as privacy concerns (57%). Common privacy advice includes exercising caution when sharing personal or sensitive data with GenAI. Discussions with students about the ethics of using GenAI in the classroom are also encouraged (53%).

    Rethinking pedagogy and increased workload: Both encouraging and discouraging GenAI use implies a rethinking of classroom strategies and increased workload for instructors and students. Institutions are providing guidance on flipping classrooms and rethinking teaching/evaluation strategies.

    Concerns about long-term impact and normalization: There are concerns regarding the long-term impact on intellectual growth and pedagogy. Normalizing GenAI use may make its presence indiscernible, posing ethical challenges and potentially discouraging intellectual development. Institutions may also be confusing acknowledging GenAI with experimenting with it in the classroom.

  • Summary of https://www.digitaleducationcouncil.com/post/digital-education-council-global-ai-faculty-survey

    The Digital Education Council's Global AI Faculty Survey 2025 explores faculty perspectives on AI in higher education. The survey, gathering insights from 1,681 faculty members across 28 countries, investigates AI usage, its impact on teaching and learning, and institutional support for AI integration.

    Key findings reveal that a majority of faculty have used AI in teaching, mainly for creating materials, but many have concerns about student over-reliance and evaluation skills. Furthermore, faculty express a need for clearer guidelines, improved AI literacy resources, and training from their institutions.

    The report also highlights the need for redesigning student assessments to address AI's impact. The survey data is intended to inform higher education leaders in their AI integration efforts and complements the DEC's Global AI Student Survey.

    Here are the five most important takeaways:

    Faculty have largely adopted AI in teaching, but use it sparingly. 61% of faculty report they have used AI in teaching; however, a significant majority of these faculty members indicate they use it sparingly.

    Many faculty express concerns regarding students' AI literacy and potential over-reliance on AI. 83% of faculty are concerned about students' ability to critically evaluate AI output, and 82% worry that students may become too reliant on AI.

    Most faculty feel that institutions need to provide more AI guidance. 80% of faculty feel that their institution's AI guidelines are not comprehensive, and a similar percentage report a lack of clarity on how AI can be applied in teaching within their institutions.

    A significant number of faculty are calling for changes to student assessment methods. 54% of faculty believe that current student evaluation methods require significant changes, and half believe that current assignments need to be redesigned to be more AI resistant.

    The majority of faculty are positive about using AI in teaching in the future. 86% of faculty see themselves using AI in their teaching practices in the future, and two-thirds agree that incorporating AI into teaching is necessary to prepare students for future job markets.

  • Summary of https://storage.googleapis.com/coscientist_paper/ai_coscientist.pdf

    Introduces an AI co-scientist system designed to assist researchers in accelerating scientific discovery, particularly in biomedicine. The system employs a multi-agent architecture, using large language models to generate novel research hypotheses and experimental protocols based on user-defined research goals.

    The AI co-scientist leverages web search and other tools to refine its proposals and provides reasoning for its recommendations. It is intended to collaborate with scientists, augmenting their hypothesis generation rather than replacing them.

    The system's effectiveness is validated through expert evaluations and wet-lab experiments in drug repurposing, target discovery, and antimicrobial resistance. Furthermore, the co-scientist architecture is model agnostic and is likely to benefit from further advancements in frontier and reasoning LLMs. The paper also addresses safety and ethical considerations associated with such an AI system.

    The AI co-scientist is a multi-agent system designed to assist scientists in making novel discoveries, generating hypotheses, and planning experiments, with a focus on biomedicine. Here are five key takeaways about the AI co-scientist:

    Multi-Agent Architecture: The AI co-scientist utilizes a multi-agent system built on Gemini 2.0, featuring specialized agents (Generation, Reflection, Ranking, Evolution, Proximity, and Meta-review) that work together to generate, debate, and evolve research hypotheses. The Supervisor agent orchestrates these agents, assigning them tasks and managing the flow of information. This architecture facilitates a "generate, debate, evolve" approach, mirroring the scientific method.

    Iterative Improvement: The system employs a tournament framework where different research proposals are evaluated and ranked, enabling iterative improvements. The Ranking agent uses an Elo-based tournament to assess and prioritize hypotheses through pairwise comparisons and simulated scientific debates (a minimal sketch of such an Elo update follows below). The Evolution agent refines top-ranked hypotheses by synthesizing ideas, using analogies, and simplifying concepts. The Meta-review agent synthesizes insights from all reviews to optimize the performance of other agents.

    Integration of Tools and Data: The AI co-scientist leverages various tools, including web search, domain-specific databases, and AI models like AlphaFold, to generate and refine hypotheses. It can also index and search private repositories of publications specified by scientists. The system is designed to align with scientist-provided research goals, preferences, and constraints, ensuring that the generated outputs are relevant and plausible.

    Validation through Experimentation: The AI co-scientist's capabilities have been validated in three biomedical areas: drug repurposing, novel target discovery, and explaining mechanisms of bacterial evolution and antimicrobial resistance. In drug repurposing, the system proposed candidates for acute myeloid leukemia (AML) that showed tumor inhibition in vitro. For novel target discovery, it suggested new epigenetic targets for liver fibrosis, validated by anti-fibrotic activity in human hepatic organoids. In explaining bacterial evolution, the AI co-scientist independently recapitulated unpublished experimental results regarding a novel gene transfer mechanism.

    Expert-in-the-Loop Interaction: Scientists can interact with the AI co-scientist through a natural language interface to specify research goals, incorporate constraints, provide feedback, and suggest new directions. The system can incorporate reviews from expert scientists to guide ranking and system improvements. The AI co-scientist can also be directed to follow up on specific research directions and prioritize the synthesis of relevant research.
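
    To illustrate the Elo-based ranking idea mentioned under "Iterative Improvement", here is a minimal Python sketch of pairwise Elo updates for competing hypotheses. The K-factor, starting rating, and the randomly chosen "debate winners" are stand-ins; the summary above does not specify these details.

    ```python
    import random

    K = 32          # update step size (a conventional Elo choice, assumed)
    START = 1200    # starting rating for every new hypothesis (assumed)

    def expected_score(r_a: float, r_b: float) -> float:
        """Probability that A beats B under the logistic Elo model."""
        return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

    def update(r_a: float, r_b: float, a_won: bool) -> tuple[float, float]:
        """Return new ratings after one pairwise comparison (simulated debate)."""
        e_a = expected_score(r_a, r_b)
        s_a = 1.0 if a_won else 0.0
        return r_a + K * (s_a - e_a), r_b + K * ((1.0 - s_a) - (1.0 - e_a))

    # Hypothetical hypotheses, all starting at the same rating.
    ratings = {f"hypothesis_{i}": float(START) for i in range(4)}

    random.seed(1)
    for _ in range(50):                      # 50 simulated pairwise debates
        a, b = random.sample(list(ratings), 2)
        a_won = random.random() < 0.5        # placeholder for the debate outcome
        ratings[a], ratings[b] = update(ratings[a], ratings[b], a_won)

    for name, rating in sorted(ratings.items(), key=lambda kv: -kv[1]):
        print(f"{name}: {rating:.0f}")
    ```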