Technology and legal services

11 December 2018




Introduction

The legal market faces challenges. People and small businesses who use legal services need better access to more affordable legal services. We know that many people find it difficult to identify that they have a legal problem. The cost of legal services can be a barrier to people receiving professional advice. Smart use of new technologies can help law firms address this unmet legal need by reducing costs and helping people find information. For law firms that serve large and often global businesses, there is increasing pressure to do more for less and an increasing expectation that the use of technology will make legal work quicker and more efficient.

We know that firms are increasingly using technology to deliver better services to clients. This report gives an overview of how technology is helping to drive innovation in legal services and gives real-life examples.

It shows how artificial intelligence (AI) is already being used to improve and enhance – not replace – the work of human lawyers. And it discusses how we can help those we regulate to take advantage of the new technologies.

Our regulation is based on the outcomes that firms achieve, not on the tools that firms use to meet them. The duty of confidentiality, for instance, applies to an email just as it applies to a letter or conversation. Although we do not intend to impose specific rules on how firms should use AI, the SRA Principles and Code of Conduct still apply to firms using AI.

One major difference between AI and older computer technologies is that AI can learn and develop. Just as firms train, supervise and review the output of their trainees and other staff, so they should train, supervise and review the output of intelligent machines.

This paper includes information on:

  • the advantages of using advanced technology in legal services
  • how technology and AI are already being used in legal services and the ethical issues to consider
  • the role regulation plays in encouraging the use of new technology
  • the steps you can take to make sure the technology is used safely, which we have written in conjunction with the National Cyber Security Centre (NCSC).

We hope that you will find it useful to see our view of the opportunities and risks that these technologies present.


AI refers to software systems that can interpret data in ways that would normally need human involvement. It is loosely defined as machine learning that can improve its own capabilities without needing humans to reprogram it.38 This allows the system to process information more quickly and accurately. AI systems are generally focused on specific tasks and aim to assist and enhance performance. They enhance human judgment and intelligence, rather than replace the need for it.

"Automated machines collate data – AI systems understand it."
Microsoft39

The use of AI has great potential to increase business efficiency, with advantages for both firms and their clients. Its use across the UK has the potential to add £630bn to the economy by 2035.40 And it is predicted to create 14.9m new jobs by 2027.41

Commercial use of machine learning is a relatively recent innovation, so its use in the law is still in its early stages. We can, however, already see signs of the impact that these intelligent systems will have.

Uses of AI in law

Many legal activities need law firms to produce and check very large amounts of information accurately. This can be very time consuming, which makes it extremely expensive. It also needs the understanding of a trained lawyer, leaving them unavailable for other tasks. AI has the potential to take over this sort of work once the system has been trained appropriately.

AI can support evidence-based decision-making. It can help determine the chances of success in litigation, support the disclosure process and is also applicable in areas of law such as commercial conveyancing. For example, AI contract review tools can read and understand several leases relating to one client and pull out and risk assess important terms. This frees lawyers to spend time engaging with clients and on tasks that need human skills.

As with many new technologies, firms may have initial uncertainty around the use of AI. This may soon give way to a perception that it is less efficient or riskier not to use it.

Around 40 of the 100 biggest UK firms are already using AI systems on active files, quadruple the number doing this two years ago. Around 30 further top firms are currently piloting systems, and still more are considering a pilot.42

AI systems have been developed and applied in areas that include:

  • document reviews, such as contract reviews and discovery
  • conflict checks and due diligence
  • identifying precedents
  • legal research and analytics
  • predicting case outcomes
  • billing.

Automated review

AI platforms can interpret and review documents such as commercial loan agreements or corporate contracts. This can save considerable time and expense. One large accounting firm states that its contract analysis system saves 360,000 hours of its lawyers' time each year while improving accuracy. It can review documents in seconds rather than hours.43

AI can also make due diligence and other compliance tasks more efficient. One of these systems is being used to halve the time taken on due diligence work.44 Rather than simply searching for critical terms, the system 'reads' documents and can detect missing information. The main reason for introducing it was to handle a high volume of data rather than to enhance productivity.45

Document discovery is another very time-consuming task that has the potential to be automated with AI. A US law firm used this technology in 2016 to draw out the potentially relevant information from 29m documents.46 AI discovery may reduce the chance of missing an important piece of evidence. It may also make the facts in the case clearer at an earlier stage, helping to reach an appropriate outcome faster and making settlement more likely.
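The kind of automated review and discovery described above is, at its core, text classification: scoring each document against examples that humans have already labelled relevant or not. As a hedged illustration (not any vendor's actual system), a minimal naive Bayes classifier over word counts can be written in plain Python:

```python
import math
from collections import Counter

def train(docs):
    """docs: list of (text, label) pairs, e.g. label in {'relevant', 'not_relevant'}.
    Counts words per label and documents per label."""
    word_counts = {"relevant": Counter(), "not_relevant": Counter()}
    doc_counts = Counter()
    for text, label in docs:
        doc_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, doc_counts

def classify(text, word_counts, doc_counts):
    """Return the label with the higher naive Bayes log-score,
    using add-one smoothing for unseen words."""
    vocab = set(word_counts["relevant"]) | set(word_counts["not_relevant"])
    total_docs = sum(doc_counts.values())
    best_label, best_score = None, float("-inf")
    for label in doc_counts:
        score = math.log(doc_counts[label] / total_docs)  # log prior
        total_words = sum(word_counts[label].values())
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) /
                              (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

A production discovery system would use far richer features and models, but the principle is the same: the quality of the human-labelled training set drives the quality of the output, which is why the report stresses training and supervising these systems.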


Case example: An algorithmic competition

A legal technology company organised a contest to test their AI against human legal performance. The competition gave 20 experienced lawyers and an AI system the task of detecting issues in contract clauses. The company chose lawyers who had significant commercial experience in this task, and the contracts they chose were five publicly available non-disclosure agreements.

Both humans and the AI were highly accurate, with the AI performing as well as the highest scoring lawyers in the exercise. The human lawyers, however, took 92 minutes on average to complete their reviews while the AI took 26 seconds.


Case example: AI-powered legal privilege review

At the start of 2018, the Serious Fraud Office (SFO) used AI to process more than half a million documents in a day as part of an investigation. This was 2,000 times faster than a human lawyer.47 The SFO has decided to use an AI tool on all new casework.


Case example: Predicting outcomes

Machine learning can analyse huge amounts of historical reference data to identify patterns and relationships, giving insights on future outcomes. This can be used to assess the chances of winning a case, to predict the other party's likely strategy, or to value a settlement.

The ability to predict the outcome and likely settlement value of a case at the outset should contribute to reaching appropriate settlements faster. By giving a better overview of the likely scale and complexity of a case, it should also make cost and time estimates more accurate.


Case example: A predictive competition

A legal tech start-up company staged a 'human versus machine' challenge to predict the outcomes of real payment protection insurance (PPI) cases received by the Financial Ombudsman. The result was a comfortable win for the AI system, which was accurate in 86 percent of cases compared to 62 percent for the humans. The team behind the AI stated that their system's victory may have been due to a better grasp of non-legal factors that contributed to the outcome of cases.
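Outcome-prediction systems of this kind are typically statistical models fitted to features of past cases. As a minimal sketch under stated assumptions — the features and training data here are invented for illustration, and real systems use far richer inputs — a logistic regression trained by gradient descent in plain Python:

```python
import math

def train_logistic(X, y, lr=0.5, epochs=1000):
    """Fit weights w and bias b so that sigmoid(w.x + b) approximates the
    probability of a favourable outcome. X: list of feature vectors from
    past cases; y: 1 for a win, 0 for a loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1 / (1 + math.exp(-z))
            err = p - yi  # gradient of log-loss wrt the linear output
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def win_probability(x, w, b):
    """Predicted probability of winning for a new case's features."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))
```

The model's predictions are only as good as the historical data it was fitted to, which is why the PPI result above — where non-legal factors drove outcomes — is instructive: the model learns whatever patterns the data contains, legal or not.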


Case example: Legal operations platforms and improved chatbots

Firms have teamed up with universities and software companies to develop AI software for use in legal services. One alternative business structure (ABS), in conjunction with a US software company and based on university research, developed software to implement global, end-to-end legal operations platforms. This technology aims to help in-house legal teams make quicker and better decisions, through automation of workflows, data and document generation, case and document management, document storage, reporting and systems integration.

Improved chatbots use AI for natural language understanding and self-learning. This lets them give automated expert advice on more complex situations. One example allows two parties to work together to create agreements, offering expert help when the parties need it. The advice that the parties receive is impartial because the AI is not a human acting for one party. This means that both sides can use the system at the same time without conflict concerns. This shows how AI systems can help solve problems efficiently and collaboratively.

Ethical issues from the use of AI

The question of how to use AI in professional practice while meeting ethical standards has been the subject of some debate.48 Some issues that could arise are:

  • self-learning systems that have direct interactions with clients are in some ways carrying out the function of a lawyer without a human directly present
  • a firm that operates a chatbot to give basic advice online may not be able to identify all the individuals who the system is advising on its behalf
  • a self-executing contract in conveyancing may involve the system carrying out a reserved activity without requiring human supervision
  • the results of a neural network's analysis may be hard to verify, and it may be hard to understand how the system has reached the conclusions that it has.

It is important for those using AI systems to be sure that they are doing so in a way that is consistent with their professional duties (see also the section on 'Transparency and bias').

The European Commission has set up an AI Alliance, in partnership with the experts on its High Level Group on Artificial Intelligence, to establish draft guidance on the ethical use of AI.49 The alliance allows members to contribute their views while accessing official documents on the subject of AI and adding reports to an open library. The Alliance is likely to produce its draft guidance by the beginning of 2019.

As the use of AI in businesses is relatively new, the ethical issues are still emerging. As with many other issues of innovation, solicitors taking up AI in their work will need to apply the Principles and their own ethical judgment to resolve issues that come up. We discuss some of the issues below and will continue to update the advice that we give as the scope and use of AI develops.

Protecting data

The General Data Protection Regulation (GDPR) applies to personal data being used in the big data models that drive AI. The Information Commissioner's Office (ICO) has set out their views on the implications of big data, AI and machine learning for data protection.50

When a firm is training an AI system, they may need to use data from one client's case, so the AI can understand other clients' cases. Solicitors gain expertise by learning from cases and applying that knowledge to others, and the same applies to AI. They must still protect the confidentiality of client data and avoid conflicts.

Some firms may wish to work together and with outside technology companies to produce effective AI. They must determine how best to protect client confidentiality and meet their professional obligations in these cases. It may help to anonymise and aggregate the data and make sure that clients have given their consent to how their data will be used.

All businesses holding sensitive data need to protect it. This is particularly the case in solicitors' firms given their additional duties of confidentiality and legal privilege. AI can be used to assess compliance with GDPR.51

Our paper, IT Security: Keeping information and money safe, gives more information on the risks and how to make sure you have effective controls in place. The ICO's accessible guidance helps small businesses comply with the GDPR, and the Law Society has a guide for solicitors on GDPR compliance.

Transparency and bias

It is important that firms are transparent with their clients about how they are using technology, particularly where issues of data processing and confidentiality are concerned.

Currently AI needs a human to operate it, interpret and confirm its results and quality control the system. As more of the processing work supporting legal services is completed by computers, firms need to be careful that the work remains transparent and to be aware of the associated ethical challenges.52

With GDPR, firms must be able to tell people how their data is used. Firms also need to be able to explain the assumptions and reasoning behind some automated decisions. They may find this difficult where the decisions are made by self-learning AI. This requires some level of expertise and firms will need to invest in this resource. The ICO expects "algorithmic accountability and auditability". In other words, they expect firms to be able to show that their algorithms comply with GDPR.53 And we expect the same from firms – they need to be able to demonstrate that their advice is competent, fair and compliant with their other obligations, such as confidentiality and conflict obligations.

Without transparency, AI is more likely to develop biases without the operator realising this. There have been studies54 as well as real world examples of how unwanted biases exist and develop in sophisticated algorithms.55 For example, a facial recognition system may fail to recognise all individuals it is supposed to identify if it has only been trained on a single ethnicity.
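One practical way to catch this kind of bias is to measure a system's accuracy separately for each group it serves, rather than relying on a single overall figure. A hedged sketch — the group labels and disparity threshold are illustrative, not prescribed:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) tuples.
    Returns {group: accuracy}, making visible any disparity that an
    overall accuracy figure would hide."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        correct[group] += int(predicted == actual)
    return {g: correct[g] / total[g] for g in total}

def flag_disparity(per_group, max_gap=0.1):
    """Flag when the gap between the best- and worst-served group
    exceeds max_gap (an illustrative tolerance)."""
    return max(per_group.values()) - min(per_group.values()) > max_gap
```

A facial recognition system trained on a single ethnicity, as in the example above, would show high accuracy for that group and poor accuracy for others — exactly the pattern this per-group check is designed to surface.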

To address this problem, a group of technology companies including IBM, Microsoft, Facebook and Google formed a partnership in 2017.56 The group works to recognise and minimise bias.

Solicitors adopting these technologies must make sure that they monitor the results. They must train them in a way that makes sure that the Principles and Code of Conduct are followed. It is important to monitor the performance of a system on an ongoing basis. In many ways, this is a similar task to that of introducing and supervising a trainee.

One advantage of AI is that checking and testing its workings can be an easier task than analysing human thinking, particularly if the AI has been set up well. While the AI systems used by lawyers can have biases, misunderstandings and errors in the same way as those who train and use them, it can be simpler to identify and correct them.

Open data

AI technology depends on the availability of data for efficiency and development. It is particularly effective when large systems can communicate with one another. The Government recommended in 2017 that data held by public organisations should be made open wherever the risks allow.57

We are promoting open data in our 'Better information, more choice' reforms. We will publish more of the regulatory information we hold on firms including a digital register. Firms will also need to make a range of information available, such as prices and the services offered on their websites. The Legal Services Board has approved plans to publish price information for some areas of law, including conveyancing, probate, and small business debt recovery.

Quality assurance and testing

Bias is not the only way that an AI model could produce poor results. The system's decisions or other outputs may not be accurate if:

  • a firm uses the wrong type of analytical system for a task
  • the system is trained using poor or incomplete data.

As with any type of computer software, there will also be many different types of AI on the market, some of which may vary in quality.

It can be hard to see why AI systems reach the conclusions that they do. Because of this, problems in their reasoning may only become apparent later. By that time, they may have affected the advice that the firm has given or its decisions in litigation. As these systems can learn very quickly, problems in their reasoning can also appear rapidly.58

It is important that firms have a structured quality assurance programme for the systems they buy or use. They should test their systems before using them across their operations, for instance:

  • piloting a chatbot system before rolling it out to the public
  • testing automated discovery on archived files before using it for new cases.

As with avoiding bias when training a system, this testing and assurance should be an ongoing process. Just as supervisors assess trainees' work at regular intervals to make sure they maintain quality and ethical standards, firms should do the same for AI.
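Testing automated discovery on archived files, as suggested above, amounts to comparing the system's output against documents humans have already reviewed and computing precision and recall. A minimal sketch (the acceptance thresholds are illustrative assumptions, not regulatory requirements):

```python
def precision_recall(predicted_relevant, actually_relevant):
    """Compare documents the system flagged against a human-reviewed
    archive. In discovery, recall usually matters most: a missed
    relevant document is worse than an extra one to review."""
    predicted = set(predicted_relevant)
    actual = set(actually_relevant)
    true_positives = len(predicted & actual)
    precision = true_positives / len(predicted) if predicted else 1.0
    recall = true_positives / len(actual) if actual else 1.0
    return precision, recall

def passes_pilot(precision, recall, min_precision=0.8, min_recall=0.95):
    """Illustrative acceptance test for moving beyond a pilot."""
    return precision >= min_precision and recall >= min_recall
```

Running this kind of check at regular intervals, not just once before rollout, matches the report's point that assurance should be ongoing.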

Ethical responsibility and the scope of regulation

Our regulation is based on the outcomes that firms achieve, not the tools that they use to achieve them. For example, we expect firms to give a competent and timely service to their clients, but we would not try to tell them which case management method they should use to do this.

We do not intend to impose any specific rules on the use of IT or AI. It is not for us to say which AI systems firms should buy.

We regulate all activities carried out by recognised bodies and sole practitioners. With ABSs, we regulate all their legal activities and any other activities specified on their licence. If a firm we regulate is using advanced technologies in giving a service that we regulate, then we will regulate that activity on the same basis as any other. This includes where the firm is automating the activity concerned.

Individual solicitors and firms are responsible for the service they give to people, including whether they use technology to advise clients or to work on client matters. They cannot outsource this responsibility to a third party. If there is an error or flaw in an AI system run by, or provided by, a separate technology company, then we are unlikely to take regulatory action where the firm did everything it reasonably could to assure itself that the system was appropriate and to prevent issues arising. People will of course be able to seek redress in the usual way if they have suffered a loss or detriment, such as taking their complaint to the Legal Ombudsman or making a negligence claim.

We want to see firms using AI and other IT to improve their work and help their clients. It is still, of course, important that they protect client data and money while maintaining their other obligations. We have joined up with the NCSC to create the following list of the best tips for staying safe while using IT.

General principles

Cyber security does not have to be complicated. Following the advice in this section should help you keep your clients and your own business safe. No guide can guarantee to protect you from all threats, but sensible practices can reduce the chance of a successful attack.

  • Have sensible and pragmatic security arrangements to support you and your staff while you use IT systems.
    • Security that interferes with your, and your staff's, ability to work is bad security.
  • No security measure is completely reliable, and attackers will sometimes succeed.
    • Have a plan to recover from attacks, and to be able to detect when they have happened.
    • Try to minimise the harm that a single breach could cause.

Maintain your system

Keeping your IT equipment up to date is one of the most important and effective things you can do to improve your security. Software developers will update programs on a regular basis to fix vulnerabilities that they have discovered. Your systems are vulnerable until you patch them by installing the update.

  • Keep your systems and devices up to date.
  • Once a system is no longer supported by its manufacturer and is no longer kept updated, you should replace it.

You also need to make sure that your system and any tools you use are properly defended and that this defence is effective.

  • Use antivirus software on all your desktops and laptops.
    • Antivirus software is included for free in most popular operating systems.
    • Mobile phones and tablets may not need separate antivirus software. The NCSC's guidance gives more detail about this.
  • Make sure that your system has a firewall, which creates a defence between your own network and the internet.
  • Most popular operating systems include a firewall, but you should check that it is switched on and working.

Backup your data

You need to protect not only your clients' confidential information but also your own data, so that you can keep operating after an incident. Part of this protection involves keeping effective and regular backups.

  • Backup your important data to protect it from loss due to an accident or a ransomware attack.
    • Identify what you need to keep.
    • Make sure that your backup system is not permanently connected to the device holding the original copy, to preserve it if there is a ransomware attack.
    • Make sure that access to backup systems is restricted.
    • A backup on the cloud, or in remote storage you control, would be a useful addition to a local backup system.
    • Cloud storage is affordable and can represent a simple solution.
  • Make sure that you make backups frequently enough for them to be useful.
    • Automated backup systems can help make backing up a part of your everyday activities.
  • Make sure you know how to restore your system from a backup.

For more information, the NCSC produce more detailed advice on backup systems.

Working on the move

The ability to work on the move is a major advantage brought by IT, but you need to protect confidential information. As mobile devices such as smartphones are not used only in the safety of the office, they need careful protection.

  • Encrypt laptops and install a system to track and delete data from tablets and phones remotely if they are lost or stolen.
    • Someone who is not authorised to access a device or system should not be able to access information on it or use it to access working systems.
    • A suitable PIN or complex password will protect your device, and many devices include fingerprint recognition.
    • The NCSC give more detailed advice on how to protect mobile devices.
  • Be careful about who can see or overhear what you are doing when working with sensitive information.
    • Screen protectors are available which can help stop people from reading over your shoulder.
  • Public wi-fi hotspots can be insecure, and it is hard to prove that a hotspot belongs to who it claims.
    • The best option for remote internet access is to use your device's mobile 3G or 4G network, which will have built-in security.
    • Devices without built-in access to mobile networks can use tethering, which shares the connection from another device, or can use a dongle from the mobile provider.
    • If you must use public wi-fi, then you should use a virtual private network from a reputable provider, which secures your data.

Access controls

Your clients' confidential information and your own business data should be accessible to you but not to anyone you have not authorised. When set up correctly, passwords are a free, easy and effective way to secure your systems.

  • Use two-factor authentication for log-ins where possible:
    • Two-factor authentication means systems that need two different methods to prove identity before they allow access, for instance a password combined with a code sent to a smartphone.
  • Make sure that you and all staff avoid predictable passwords, using longer strings of characters that cannot be easily guessed.
    • Make sure your staff have access to good guidance on choosing passwords that are easy to remember but hard to guess.
    • The NCSC give advice on how to choose a non-predictable password.
    • It is acceptable not to change passwords frequently if they are secure to begin with.
    • Change all passwords if you have reason to suspect any breach in the system.
    • Your system should not need staff to share passwords or accounts to do their jobs.
  • If staff need multiple passwords, consider whether a password manager could help.
    • Password managers create and store passwords for you and secure them with a master password.
    • As the master password gives access to all the others, it will need to be a strong one, such as a combination of three random words.
  • Control access to removable media such as datasticks.
    • Datasticks can be a channel for malware infection and can also be used to steal data.
    • You can reduce the need for datasticks by using email or cloud storage to transfer data.
  • Do not use an administrator account on your system (a user account with the privilege to access others' accounts or install new software) for regular work.
    • Reserve administrator accounts for when the system needs to be maintained.
    • Reserve them for use by those whose responsibilities require them to maintain the system.
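The 'three random words' approach recommended above for a master password can be generated programmatically rather than invented by hand, which avoids predictable choices. A sketch using Python's secrets module — note the word list here is a tiny illustrative stand-in; a real list should contain thousands of words:

```python
import secrets

# Illustrative stand-in word list: a real list (for example, a
# dictionary file) should contain thousands of words so that the
# resulting passphrase is genuinely hard to guess.
WORDS = ["coffee", "train", "hippo", "garden", "cloud", "basket",
         "violin", "pepper", "marble", "rocket", "candle", "forest"]

def three_random_words(separator="-"):
    """Pick three words using a cryptographically secure random
    source (secrets), not the predictable random module."""
    return separator.join(secrets.choice(WORDS) for _ in range(3))
```

Using `secrets` rather than `random` matters here: `random` is designed for simulations and is predictable, while `secrets` draws from the operating system's secure random source.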

Phishing

Phishing attacks are when criminals use fake communications to try to steal information or money. These are the main cyber security threat to solicitors and their firms at present. Whatever your firm's size or type of work, you will receive phishing attempts. Some of these attacks will get past even the most observant users.

  • Consider why criminals might want to target your organisation to help decide how to prevent this.
  • Make sure your staff know your normal operating procedures, particularly considering interactions with other organisations, so they are in a better place to spot requests that are out of the ordinary.
  • Make sure you have a process for staff to seek help if they are unsure.
  • Decide what you and your staff should do with a communication that may be a phishing attempt.
  • Many phishing attempts impersonate a senior member of your staff and aim to get a junior staff member to send money or information urgently. A clear reporting line for genuine instructions will help to protect from this.
  • It can help to consider how your communications appear to your clients, suppliers and other firms, to make sure that your own communications are not mistaken for phishing emails and do not leave others vulnerable.
    • Including a routine warning note that your bank details will not change at any point in a transaction will help to prevent email modification frauds.

It would be unrealistic to expect staff to identify all phishing emails. However, there are common warning signs.

  • Some phishing scams use poor spelling or grammar or are not of the level of quality or design expected from the sender they are impersonating.
  • Less targeted phishing emails may address you as "valued customer" rather than by name.
  • Be suspicious of invoices that relate to services you are not aware of having ordered, or of emails that claim you have been a victim of some form of crime.

The NCSC give more detailed advice on how to protect yourself from phishing.

Training and testing

It is important that all your staff understand how to use IT systems safely. You should make sure that they know why you have put security measures in place.

  • Use training to help build a culture of reporting, where staff feel comfortable coming forward with issues that they have encountered.
  • Test security systems to make sure that you are confident that they are working and that you know what to do in the event of an incident. If you are using advanced systems such as AI, be aware that algorithms can contain errors or hidden biases.
    • Test your proposed system against a variety of situations to make sure you are satisfied before using it for real cases.

For more information

The NCSC offers more advice and information about using IT safely on their website. They have produced a specific guide on security for smaller businesses, for whom they also recommend the ICO's guidance on information security.

Small businesses may wish to take up certification under the Cyber Essentials scheme. For larger businesses and those who feel they may be at particular risk of cybercrime, the 10 Steps to Cyber Security scheme can help to develop their defences further.

  1. How law firms measure against other sectors, Peppermint Technology, 2016
  2. How law firms measure against other sectors, Peppermint Technology, 2016
  3. It's time for change as competition stiffens and IoT era evolves - PwC Law Firms' Survey, PricewaterhouseCoopers, 2017
  4. The future of the professions, Oxford Today, 2015
  5. Improving access - tackling unmet legal needs, SRA, 2017
  6. What is the technology needed for access to justice?, Law, Technology and Access to Justice, 2017
  7. Priority risks, Solicitors Regulation Authority, 2018
  8. Innovation in legal services: a report for the Solicitors Regulation Authority and the Legal Services Board, ERC, 2015
  9. Unbundling a market: the appetite for new legal services models, Allen & Overy, 2014
  10. Tracker survey 2018: how consumers are using legal services, Legal Services Consumer Panel, 2018
  11. Government acts to improve the home buying process, Press release by Department for Communities and Local Government and The Rt Hon Sajid Javid MP, 2017
  12. Government acts to improve the home buying process, Press release by Department for Communities and Local Government and The Rt Hon Sajid Javid MP, 2017
  13. Online court pilot begins amid warnings, New Law Journal, 2018
  14. Online dispute resolution for low value civil claims, Online Dispute Resolution Advisory Group, 2015
  15. Making peace on eBay: resolving disputes in the world’s largest marketplace, ACResolution, 2008
  16. MyLawBc: my problem, my solution, MyLawBc, 2017
  17. Modria and the future of dispute resolution, Bloomberg Law, 2015
  18. Shieldpay claims UK’s first fully digital mortgage settlement, Finextra, 2018
  19. This week in legal tech: exclusive results of a new small firm survey, Above the Law, 2016
  20. What impact could technology have on the law firm-client relationship?, LexisNexis, 2017
  21. AI is doing legal work, but it won’t replace lawyers yet, New York Times, 2017
  22. The future computed, Microsoft, 2018
  23. The future computed, Microsoft, 2018
  24. Annual report 2017–2018, Legal Ombudsman, 2018
  25. Bellwether report: the race to evolve, LexisNexis, 2017
  26. Law firms of the future—virtual lawyers, LexisNexis, Oct 2017
  27. Smart contracts on the blockchain: can businesses reap the benefits?, Forbes, 2017
  28. Online conveyancer claims blockchain based transaction first, Legal Futures, 2017
  29. HM Land Registry signals the start of its transformation, HM Land Registry, 2017
  30. About us, LexSnap, 2018
  31. Meet Billy Bot, the robot who is also a barrister’s clerk, UnHerd, 2017
  32. Exclusive: here comes Billy, the robot junior clerk, Legal Futures, 2017
  33. Mishcon de Reya launches technology incubator, Mishcon de Reya, 2017
  34. Linklaters partners with Accord Project to offer clients new smart contracts, Linklaters, 2018
  35. The Linux foundation is the world’s largest non profit organisation for open source software. Its largest project is the Linux operating system, which is the leading operating system for servers, embedded systems and mainframe computers and which has the largest installed base of any general purpose computer operating system because of its use as the basis of the Android mobile phone system.
  36. 'Law for Good' hackathon a complete success, Hackney Community Law Centre, 2016
  37. Services Automation For Large Law Firms: Our Consortium, Reynen Court, 2018
  38. The business of artificial intelligence, Harvard Business Review, 2017
  39. The future computed, Microsoft, 2018
  40. Growing the artificial intelligence industry in the UK, Department for Business Energy and Industrial Strategy, 2017
  41. The future computed, Microsoft, 2018
  42. Legal bots / expert systems – rise of the robots, Times Brief, 2017
  43. JPMorgan software does in seconds what took lawyers 360,000 hours, Bloomberg.com, 2017
  44. Robots aren't destroying middle-class jobs yet — just doing all our boring paperwork, Business Insider, 2017
  45. Robots aren't destroying middle-class jobs yet — just doing all our boring paperwork, Business Insider, 2017
  46. Artificial intelligence disrupting the business of law, Financial Times, 2016
  47. AI powered ‘Robo-Lawyer’ helps step up the SFO’s fight against economic crime, Serious Fraud Office, 2018
  48. AI and risk management: innovating with confidence, Deloitte, 2018
  49. Commission appoints expert group on AI and launches the European AI Alliance, European Commission, 2018
  50. Big data, artificial intelligence, machine learning and data protection, ICO 2018
  51. Informatica brings AI to GDPR compliance, data governance, IT.world, 2017
  52. What will it take for us to trust AI?, Harvard Business Review, 2016
  53. AI and risk management: innovating with confidence, Deloitte, 2018
  54. Biased bots: Human prejudices sneak into AI systems, University of Bath, 2017
  55. Forget killer robots—bias Is the real AI danger, MIT Technology Review, 2017
  56. Introduction from the founding co-chairs, Partnership on AI, 2018
  57. Growing the artificial intelligence industry in the UK, Department for Business Energy and Industrial Strategy, 2017
  58. AI and risk management: innovating with confidence, Deloitte, 2018
  59. Looking to the future: flexibility and public protection – a phased review of our regulatory approach, SRA, 2016
  60. SRA takes further steps to promote innovation, SRA, 2018