Episodes

  • As increasingly sophisticated video and audio recording devices become available at only moderate cost, household deployment of surveillance tech is becoming ubiquitous. However, those deploying these devices do not always consider the impact of their surveillance tech on neighbouring properties, or the legal ramifications of that impact. This episode explores this theme and considers the causes of action and practical steps available to a neighbour adversely affected by overly intrusive surveillance tech.

    References:

    For a couple of illustrative cases that have reached the Courts, see Fairhurst v Woodard [2021] 10 WLUK 151 and Woolley & Woolley v Akbar or Akram [2017] SC EDIN 7.
    ICO guidance is available for people being filmed and those using domestic CCTV.
    Some discussion of the 'household exemption' is found in František Ryneš v Úřad pro ochranu osobních údajů [2015] 1 W.L.R. 2607.
    Those in neighbour disputes should consider these steps, including using a mediation service.
  • As of now, the UK has not enacted online harms legislation, and social media platforms in general are under no statutory duty to protect children from harmful content. However, providers of video-sharing platforms do have statutory obligations in that regard, set out in Part 4B of the Communications Act 2003 (added to the Act by amendment in 2020). Amongst other things, section 368Z1 of the Act requires providers of such platforms to take appropriate measures to protect under-18s from videos and audio-visual commercial communications containing "restricted material". Regardless of the statutory obligations (or lack thereof in the case of non-video social media platforms), many platforms expend considerable effort seeking to protect children from harm.

    In this episode, we consider how a video-sharing start-up might focus its resources in order to comply with its statutory obligations and to maximise the prospects that it offers a safe environment for children. We are joined in this endeavour by Dr Elena Martellozzo, an Associate Professor in Criminology at the Centre for Child Abuse and Trauma Studies (CATS) at Middlesex University. Elena has extensive experience of applied research within the Criminal Justice arena. Her research includes children and young people’s online behaviour, the analysis of sexual grooming and online harm, and police practice in the area of child sexual abuse. Elena has emerged as a leading researcher and global voice in the fields of child protection, victimology, policing and cybercrime. She is a prolific writer and has participated in highly sensitive research with the Police, the IWF, the NSPCC, the OCC, the Home Office and other government departments. Elena has also acted as an advisor on child online protection to governments and practitioners in Italy (since 2004) and Bahrain (2016) to develop a national child internet safety policy framework.

    Further reading:

    Part 4B of the Communications Act 2003 can be found here: https://www.legislation.gov.uk/ukpga/2003/21/part/4B
    A description of the Internet Watch Foundation technology suite can be found here: https://www.iwf.org.uk/our-technology/
    A series of recommendations for various stakeholders (including tech companies) in relation to protection of children online in the age of COVID is made in the Glitch report.
    An article by Dr Martellozzo and her team on the effect of harmful content on children can be found on Sage Open here.
    Dr Martellozzo explains the grooming process in Chapter 4 of Bryce, Robinson and Petherick, Child Abuse and Neglect: Forensic Issues in Evidence, Impact and Management, Academic Press, 2019.
    In the LSE-hosted blogpost Speaking Up: Contributing to the fight against gender-based online violence, Dr Martellozzo, Paula Bradbury and Emma Short provide commentary and references on this issue.

  • This end-of-year episode explores the viability of delivery of Christmas gifts by drone in UK airspace. Someone has ambitious plans involving the precision drop of parcels down chimneys. We discuss the legal risks that arise and the hurdles that will have to be jumped if the Civil Aviation Authority is to authorise that plan.

    Further reading:

    The primary guidance document for those wishing to operate unmanned aircraft systems within the UK is CAP722. It sets out the relevant law and provides substantial amounts of operational material and guidance.
    The Civil Aviation Authority’s ‘Drone and Model Aircraft Code’ can be found here.
    The two legislative sources referred to in the podcast are the Air Navigation Order 2016 and the Civil Aviation Act 1982.
  • The long-anticipated Supreme Court decision in Lloyd v Google [2021] UKSC 50 was handed down on 10 November 2021. Reversing the decision of the Court of Appeal and reinstating the first instance decision of Warby J, the Supreme Court held that Richard Lloyd could not pursue a damages claim as representative of the class of individuals affected by Google's alleged breach of the Data Protection Act 1998 in relation to the so-called "safari workaround". The reasoning is involved, and the Judgment bears reading in full. In essence, however, the court held that establishing a right to damages for breach of the Data Protection Act 1998, and quantifying those damages, involved a claimant-by-claimant analysis that, in each case, must identify the breach affecting that claimant, the loss suffered by that claimant, and the causal connection between breach and loss. The claims were accordingly unsuitable in principle for a representative action. The Judgment also addressed in some detail the nature of damages for breach of data protection legislation, and the nature and scope of representative actions under CPR 19.6.

    In this episode we explore some of the ramifications of the decision through a scenario involving a data breach at an online marketplace.

    The Judgment may be found here, and a press summary here.

  • Non-fungible tokens (or 'NFTs') are a blockchain-based mechanism for uniquely identifying digital assets, and verifying both authenticity and ownership. An increasingly popular use case for NFTs (albeit only one of several use cases) involves the creation and sale of digital art. Notwithstanding that the NFT marketplace for digital art is dynamic and growing (with some NFTs selling at auction for vast sums), the legal basis of NFTs and, critically, the nature of what a purchaser actually acquires when purchasing an NFT artwork, are not universally understood. We explore these issues in this episode, which concerns the purchase of an NFT image for commercial use.

    Further reading:

    A useful introduction to NFTs, together with links for further reading, can be found on the Ethereum website here: https://ethereum.org/en/nft/.
    An entertaining explanation of the phenomenon of digital art NFTs can be found here: https://www.theverge.com/22310188/nft-explainer-what-is-blockchain-crypto-art-faq.
    For those who want to dig into the detail, the most commonly used technical standard for NFTs is currently ERC-721, which is explained here: https://ethereum.org/en/developers/docs/standards/tokens/erc-721/.
    A minimal code sketch of what the on-chain record for an ERC-721 token actually contains appears below.
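    For readers who like to see the moving parts, the following is a minimal sketch (not taken from the episode) of what an ERC-721 record on chain amounts to: an owner address and a metadata URI. It uses the web3.py library; the node URL, contract address and token id are placeholders to be replaced with real values.

    ```python
    from web3 import Web3

    # Minimal ABI covering only the two ERC-721 read functions used below.
    ERC721_MIN_ABI = [
        {"name": "ownerOf", "type": "function", "stateMutability": "view",
         "inputs": [{"name": "tokenId", "type": "uint256"}],
         "outputs": [{"name": "", "type": "address"}]},
        {"name": "tokenURI", "type": "function", "stateMutability": "view",
         "inputs": [{"name": "tokenId", "type": "uint256"}],
         "outputs": [{"name": "", "type": "string"}]},
    ]

    # Placeholders: substitute a real node URL, NFT contract address and token id.
    w3 = Web3(Web3.HTTPProvider("https://example-ethereum-node.invalid"))
    nft = w3.eth.contract(address="0x0000000000000000000000000000000000000000",
                          abi=ERC721_MIN_ABI)
    token_id = 1

    print("Recorded owner:", nft.functions.ownerOf(token_id).call())
    print("Metadata URI:  ", nft.functions.tokenURI(token_id).call())
    ```

    The token records ownership of the token itself, together with a pointer to metadata; what rights (if any) the purchaser obtains in the underlying artwork depends on the terms attached to the sale, which is precisely the question the episode explores.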
  • AI companies need to engage with the ethical implications of their systems. That involves planning ahead: in this episode, we therefore look at the European Union’s proposed AI regulation, and – with the help of our guest, Patricia Shaw – discuss its application in an EdTech context. The proposed regulation is available here.

    Patricia Shaw is CEO of Beyond Reach Consulting Ltd, providing AI/data ethics strategy, public policy engagement, bespoke AI/data ethics risk and governance advice, and advisory board services, across financial services, public sector (Health- and EdTech), and smart cities.

    Trish is passionate about Responsible AI and is an expert advisor to IEEE’s Ethical Certification Program for Autonomous Intelligent Systems and P7003 (algorithmic bias) standards programme, and a Fellow of ForHumanity contributing to the Independent Audit of AI Systems. She contributed to The Institute for Ethical AI in Education’s Ethical Framework for AI in Education, and is a Fellow of the Royal Society of Arts, having been on the Advisory Board for the ‘Power over information’ project concerning regulation of online harms.

    A non-practising Solicitor, public speaker, and author, Trish is also Chair of the Trustee Board of the Society for Computers and Law, a Member of the Board of iTechlaw, and Vice Chair of its AI committee. She is listed in the 2021 edition of 100 Brilliant Women in AI Ethics™.

  • Where a contract confers a discretion on one party that materially affects the rights of its counterparty, the discretion must be exercised rationally. The Supreme Court held in Braganza v BP Shipping Ltd [2015] UKSC 17 that exercising a discretion rationally involves (i) taking the right things (and only the right things) into account, and (ii) avoiding a decision that no reasonable decision-maker could have reached. In this episode, we explore how those principles might operate in the context of a discretion exercised automatically by a machine learning algorithm. We do so in the context of a fraud detection algorithm and an online farmers' market somewhere in East Anglia.

    Further reading:

    This episode was inspired by Tom Whittaker's thought-provoking article on the case of TF Global Markets (UK) (trading as ThinkMarkets) v Financial Ombudsman Service Limited [2020] EWHC 3178 (Admin). The article may be found here: https://www.lexology.com/library/detail.aspx?g=ad5569ea-af1a-4040-b596-a6a29b3c73b0
    Supreme Court decision in Braganza v BP Shipping Ltd [2015] UKSC 17: https://www.supremecourt.uk/cases/uksc-2013-0099.html
    Anyone with any doubts as to the prevalence of AI-based fraud detection systems might like to do this: https://letmegooglethat.com/?q=ai+fraud+detection. There is no problem in principle with using such tools. The issue (in a contractual context) is how their outputs are translated into discretionary decisions; a simplified sketch of that translation step appears below.
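    To make that last point concrete, here is a deliberately simplified Python sketch. It is entirely hypothetical: the thresholds, field names and review rule are invented for illustration and are not taken from the episode or the case law. It shows one way a fraud model's score might be translated into a contractual decision while keeping a record of the matters taken into account.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Decision:
        suspend_account: bool
        reasons: list[str]          # recorded so the decision can later be justified
        referred_to_human: bool

    # Hypothetical thresholds - nothing in the episode fixes these numbers.
    AUTO_CLEAR_BELOW = 0.30
    AUTO_SUSPEND_ABOVE = 0.95

    def apply_discretion(fraud_score: float, relevant_factors: dict) -> Decision:
        """Translate a model score into a contractual decision.

        Illustrative Braganza-style safeguards:
          * decide on the basis of the score plus identified relevant factors,
            not anything extraneous;
          * record the reasons relied on;
          * refer borderline scores to a human rather than suspending automatically.
        """
        reasons = [f"model fraud score = {fraud_score:.2f}"]
        reasons += [f"{k} = {v}" for k, v in relevant_factors.items()]

        if fraud_score >= AUTO_SUSPEND_ABOVE:
            return Decision(True, reasons + ["score above auto-suspend threshold"], False)
        if fraud_score <= AUTO_CLEAR_BELOW:
            return Decision(False, reasons + ["score below auto-clear threshold"], False)
        # Borderline: refer to a human reviewer before any action is taken.
        return Decision(False, reasons + ["borderline score referred for human review"], True)

    print(apply_discretion(0.97, {"chargebacks_90d": 4, "account_age_days": 12}))
    ```

    The sketch is only meant to show where Braganza bites: it is the decision rule wrapped around the model's output, rather than the model itself, that determines whether the right matters were taken into account and whether the resulting decision is one a reasonable decision-maker could reach.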
  • Fully autonomous vehicles may be a few years away, but cars offering so-called “eyes off/hands off” (or “Level 3”) automation, whereby the car is sufficiently capable that the driver’s role is limited to taking over control when requested by the car to do so, are expected to be commercially available in the very near future. In this episode we flash forward to summer 2023 and an accident involving a Level 3 autonomous vehicle. We consider how existing legal frameworks cope with the liability issues that arise when AI takes control of the driving but the driver remains in the safety chain as a fallback for when the automation cannot cope.

    Further reading:

    Riley-Smith QC and McCormick, ‘Liability for Physical Damage’ in The Law of Artificial Intelligence (2020), which helped to inspire our scenario.
    Glassbrook, Northey and Milligan, A Practical Guide to the Law of Driverless Cars (2019).
    The hack involved in this scenario: https://interestingengineering.com/teslas-autopilot-can-be-tricked-in-just-a-split-second-through-this-method
  • AI can improve how businesses make decisions. But how does a business explain the rationale behind AI decisions to its customers? In this episode, we explore this issue through the scenario of a bank that uses AI to evaluate loan applications and needs to be able to explain to customers why an application may have been rejected. We do so with the help of Andrew Burgess, founder of Greenhouse Intelligence.

    About Andrew: He has worked as an advisor to C-level executives in Technology and Sourcing for the past 25 years. He is considered a thought-leader and practitioner in AI and Robotic Process Automation, and is regularly invited to speak at conferences on the subject. He is a strategic advisor to a number of ambitious companies in the field of disruptive technologies. Andrew has written two books - The Executive Guide to Artificial Intelligence (Palgrave MacMillan, 2018) and, with the London School of Economics, The Rise of Legal Services Outsourcing (Bloomsbury, 2014). He is Visiting Senior Fellow in AI and RPA at Loughborough University and Expert-In-Residence for AI at Imperial College’s Enterprise Lab. He is a prolific writer on the ‘future of work’ both in his popular weekly newsletter and in industry magazines and blogs.

    Further reading:

    ICO and The Alan Turing Institute, ‘Explaining decisions made with AI’ (2020)
    ICO, ‘Guide to the General Data Protection Regulation (GDPR)’ (2021)
    The Data Protection & Privacy chapter in The Law of Artificial Intelligence (Sweet & Maxwell, 2020)
    An explanation of the SHAP and LIME tools mentioned by Andrew can be found at https://towardsdatascience.com/idea-behind-lime-and-shap-b603d35d34eb, and a deeper explanation for the more mathematically minded can be found here: https://www.kdnuggets.com/2019/12/interpretability-part-3-lime-shap.html
    A toy illustration of the underlying idea follows below.
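    By way of a toy illustration (the data and feature names are invented and nothing here comes from the episode), the snippet below shows the idea behind SHAP/LIME-style explanations in the simplest possible setting, a linear model: each feature's contribution to the decision is measured against an average applicant. A real deployment would use the shap or lime libraries linked above and a far richer model.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Invented toy data: three features a lender might score (all figures hypothetical).
    feature_names = ["income_k", "debt_to_income", "missed_payments"]
    X = rng.normal(size=(500, 3))
    # Synthetic "ground truth": approvals driven mostly by debt ratio and missed payments.
    y = (X @ np.array([0.8, -1.5, -2.0]) + rng.normal(scale=0.5, size=500) > 0).astype(int)

    model = LogisticRegression().fit(X, y)

    def explain(applicant: np.ndarray) -> list:
        """Additive attribution for a linear model: each feature's contribution to the
        log-odds, measured against the average applicant (the idea underlying SHAP
        values for linear models; real systems would use the shap/lime libraries)."""
        baseline = X.mean(axis=0)
        contributions = model.coef_[0] * (applicant - baseline)
        return sorted(zip(feature_names, contributions), key=lambda kv: abs(kv[1]), reverse=True)

    applicant = np.array([-0.4, 1.2, 2.0])   # hypothetical rejected applicant
    print("Approval probability:", model.predict_proba(applicant.reshape(1, -1))[0, 1])
    for name, contribution in explain(applicant):
        print(f"{name}: {contribution:+.2f} to the log-odds")
    ```

    The ranked contributions are the kind of output a bank could translate into a customer-facing reason ("the debt-to-income ratio counted most heavily against the application"), which is the sort of explanation the ICO/Turing guidance above is concerned with.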
  • This podcast explores the benefits and limitations of Smart Contracts in the context of human-provided services by considering the practicalities of using Smart Contracts to regulate the contractual relationship between brands and social media influencers.

    Further reading:

    For a more detailed discussion of smart contracts, see the Legal Statement on cryptoassets and smart contracts, November 2019, published by the LawTech Delivery Panel UK Jurisdiction Taskforce (downloadable here).
    If you’d like to see one of the starting points: N. Szabo, “Smart contracts: building blocks for digital markets” (1996) EXTROPY: The Journal of Transhumanist Thought, (16), 18(2), available in revised draft here.
    For some insight into the vulnerabilities, see, e.g., Singh et al, ‘Blockchain smart contracts formalization: Approaches and challenges to address vulnerabilities’, Computers & Security 88 (2020) 101654.
    A promising application of Smart Contracts is in supply chains, discussed here in the Harvard Business Review.
    Our scenario was inspired by a CoinDesk article on 22 January 2015 (‘Ex-Rugby Star: Smart Contracts Could Prevent Legal Disputes in Sport’).
    A toy sketch of the escrow logic such a contract might encode follows below.
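    As a rough illustration of the brand/influencer escrow logic discussed in the episode, here is a toy, off-chain Python model. It is only a sketch: a real smart contract would be written in a contract language such as Solidity and would rely on an oracle to attest that the post went live, and all names and figures below are invented.

    ```python
    from dataclasses import dataclass

    @dataclass
    class InfluencerEscrow:
        """Toy, off-chain model of the escrow logic a brand/influencer smart
        contract might encode."""
        brand: str
        influencer: str
        fee: int                 # payment locked in escrow (e.g. pence, or wei on chain)
        deadline_day: int        # day number by which the agreed post must be verified
        post_verified: bool = False
        paid_out: bool = False

        def confirm_post(self, oracle_says_posted: bool, today: int) -> None:
            # An oracle attests that the agreed post went live before the deadline.
            if oracle_says_posted and today <= self.deadline_day:
                self.post_verified = True

        def settle(self, today: int) -> str:
            if self.paid_out:
                return "already settled"
            if self.post_verified:
                self.paid_out = True
                return f"release {self.fee} to {self.influencer}"
            if today > self.deadline_day:
                self.paid_out = True
                return f"refund {self.fee} to {self.brand}"
            return "pending"

    deal = InfluencerEscrow(brand="BrandCo", influencer="@influencer",
                            fee=5_000, deadline_day=30)
    deal.confirm_post(oracle_says_posted=True, today=28)
    print(deal.settle(today=31))   # -> release 5000 to @influencer
    ```

    Even in this toy form the limitation is visible: code can verify that a post exists by a deadline, but judgements about the quality or brand-fit of human-provided content have to come from outside the contract, which is where the episode's discussion of benefits and limitations focuses.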
  • This podcast, which continues the story started in Episode 3, considers the liability implications arising out of the exfiltration of a client's confidential technical data following a cyber attack.

    For further reading, we recommend:

    Decode.legal's excellent primer on Cybersecurity for Lawyers.
    Schneier's Click Here to Kill Everybody, a thought-provoking discussion of present and future cyber attacks and their consequences.
  • This podcast explores some immediate actions that might be needed where an apparent ransomware attack threatens the leak of highly confidential technical data. For this podcast, Matthew and Iain are joined by Michael Brown of Northwall Cyber.

    For further reading, we recommend:

    Decode.legal's excellent primer on Cybersecurity for Lawyers.
    Schneier's Click Here to Kill Everybody, a thought-provoking discussion of present and future cyber attacks and their consequences.
  • This podcast explores the legal remedies that are available when Bitcoin are misappropriated, and some of the practical challenges of enforcement. For a more detailed analysis of the treatment of cryptoassets as property, see the Legal Statement on cryptoassets and smart contracts, November 2019, published by the LawTech Delivery Panel UK Jurisdiction Taskforce (downloadable here).

    The legal basis for recovery of fraudulently misappropriated Bitcoin, and the factors that are engaged on an interim application, are addressed in AA v Persons Unknown, Re Bitcoin [2019] EWHC 3556 (Comm).

    In relation to crypto-assets as property in other common law jurisdictions, see:

    Ruscoe v Cryptopia Ltd (In Liquidation) [2020] NZHC 728 (New Zealand).
    B2C2 Ltd v Quoine Pte Ltd [2019] SGHC(I) 03 (Singapore).

    For a detailed explanation as to the technical workings of Bitcoin, see Nakamoto, Bitcoin: A Peer-to-Peer Electronic Cash System (2008) (available here).

    Note: in an earlier version of this podcast, the exchange was unable to transfer Alice's bitcoin back to her because it did not have access to the fraudster's private keys. In practice, however, the majority of mainstream exchanges do control account holders' private keys (allowing them to treat "deposited" bitcoin as fungible). Therefore we have altered the scenario to reflect the more common case (albeit at the expense of illustrating a conundrum we think rather interesting!).

  • This podcast is intended as an introduction to issues that arise when an AI bot creates defamatory content. For detailed commentary on this specialist area of law, see: Gatley on Libel and Slander (12th Ed, 2017) and Duncan and Neill on Defamation (4th Ed, 2015 – with new edition forthcoming). For an overview, see our chapter on ‘Liability for Economic Harm’ in The Law of Artificial Intelligence (2020, forthcoming).


    Cases relevant to auto-generated content include:

    Bunt v Tilly [2006] EWHC 407 (QB)
    Metropolitan International Schools Ltd (trading as Skillstrain and/or Train2Game) v Designtechnica Corpn (trading as Digital Trends) and others [2009] EWHC 1765 (QB)
    Tamiz v Google Inc. [2013] EWCA Civ 68 CA

    For other jurisdictions, see e.g. Defteros v Google LLC [2020] VSC 219 at [40], in which Richards J summarised the Australian position as follows: “The Google search engine … is not a passive tool. It is designed by humans who work for Google to operate in the way that it does, and in such a way that identified objectionable content can be removed, by human intervention, from the search results that Google displays to a user.” For Hong Kong, see e.g. Yeung v Google Inc. [2014] HKCFI 1404; Oriental Press Group Ltd v Fevaworks Solutions Ltd [2013] HKCFA 47 (especially [76] for a test endorsed by the authors of Gatley).

    On the contradictory positions taken by search engines worldwide, see, e.g., Sookman, “Is Google a publisher according to Google? The Google v Equustek and Duffy cases”, C.T.L.R. 2018, 24(1).