We can’t go a day without seeing generative AI programs, like ChatGPT, in the news. There is a lot to like about ChatGPT: it can automate administrative processes in law firms and power chatbots that save time responding to emails. Smith + Malek hasn’t officially rolled out AI at the company, but some of us are playing around with it, and while the productivity gains are enticing, we’re left asking important questions about privacy and ethics. Let’s explore the intersection of automation, privacy concerns, and ethical considerations when employing ChatGPT in a law firm setting.
A New Way to Google
Like everyone, lawyers use Google to help research and answer questions. Using that familiar framework, one of our ChatGPT beta testers likened generative AI to Googling. “Using ChatGPT as that [research] starting point is useful,” he said. “The information is presented to you, and then you can prompt it for more details and links.”
The most critical part of this statement is the follow-up prompt for links, which allows the user to comb through the material that ChatGPT has fed them. In the legal profession, there are already horror stories circulating about attorneys who have cited fake cases entirely made up by ChatGPT in legal briefs. Don’t be that guy; ChatGPT responses need to be verified.
But it can save time. Administrative tasks, such as drafting a form lease agreement or answering common client inquiries, are time-consuming. Smith + Malek has been using technology to help automate these processes since our founding eight years ago. Will ChatGPT expedite the common forms and FAQ responses our business requires? No, not until the privacy concerns have been addressed.
Privacy Concerns
Privacy is a fundamental aspect of the legal profession. Any technology we use must uphold client confidentiality and data protection. As such, law firms using ChatGPT must be extremely cautious about what information they share with it. OpenAI, the organization behind ChatGPT, has a privacy policy that explains how the program uses your personal information, as well as your log data, for product development.
Law firms must exercise due diligence to ensure compliance with data protection laws and maintain client trust. Some guidelines we suggest for ChatGPT-curious attorneys include:
Data Minimization: Only share necessary information with ChatGPT, avoiding the inclusion of sensitive or confidential client data, such as names. (One way to do this is sketched after this list.)
Encryption and Secure Storage: Ensure that communications with the ChatGPT system, and any data stored within it, are encrypted and stored securely to prevent unauthorized access. You can also turn off chat history so that your prompts are not retained and your conversations are not used to train the ChatGPT model.
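To make the data-minimization point concrete, here is a minimal sketch in Python of one way a firm could scrub client names out of a prompt before it ever leaves the office, and put them back into the response locally. The `redact_prompt` and `restore_names` helpers and the `[CLIENT_1]` placeholder format are our own illustration, not an OpenAI feature, and anything like this would still need review by an actual human.

```python
# Minimal sketch of prompt "data minimization": swap known client names
# for neutral placeholders before any text is sent to ChatGPT, and keep a
# local mapping so the answer can be re-personalized afterwards.
import re

def redact_prompt(prompt: str, client_names: list[str]) -> tuple[str, dict[str, str]]:
    """Replace each client name with a placeholder like [CLIENT_1].

    Returns the redacted prompt and a mapping used to restore names locally.
    """
    mapping: dict[str, str] = {}
    redacted = prompt
    for i, name in enumerate(client_names, start=1):
        placeholder = f"[CLIENT_{i}]"
        mapping[placeholder] = name
        # Whole-word, case-insensitive match so "Jane Doe" and "jane doe" are both caught.
        redacted = re.sub(rf"\b{re.escape(name)}\b", placeholder, redacted, flags=re.IGNORECASE)
    return redacted, mapping

def restore_names(text: str, mapping: dict[str, str]) -> str:
    """Put real names back into ChatGPT's response, entirely on the firm's side."""
    for placeholder, name in mapping.items():
        text = text.replace(placeholder, name)
    return text

# Example usage with made-up names
prompt = "Draft a short lease renewal reminder for Jane Doe of Acme Holdings."
safe_prompt, names = redact_prompt(prompt, ["Jane Doe", "Acme Holdings"])
print(safe_prompt)  # "Draft a short lease renewal reminder for [CLIENT_1] of [CLIENT_2]."
# ... send safe_prompt to ChatGPT, then re-personalize its reply locally:
# print(restore_names(chatgpt_response, names))
```

Because the mapping between placeholders and real names never leaves the firm, the client's identity stays out of OpenAI's logs even if the prompt itself is retained.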
Ethical Considerations
Right now, the world is wrestling with policy around ChatGPT. It’s too new to know how to regulate it, and much of that confusion stems from ethical questions. We don’t have those answers either, but we have been thinking about the ethical implications a law firm must navigate in a world where ChatGPT is present. Here are some key aspects to ponder:
Consent and Transparency: Clients deserve to know about the use of ChatGPT in a firm’s administrative processes. But is there a line where informed consent should be required?
For example, what if you learned that this blog post was written by ChatGPT? Would it be different to learn that it wrote 30% of it, 50% of it, or 90% of it?
Don’t worry, real humans write this blog. But those rhetorical questions start to illuminate the importance of consent and transparency in the use of AI. While ChatGPT can automate processes, accountability ultimately rests with the law firm.
Professional Responsibility: While ChatGPT can assist in administrative tasks, legal professionals should always review and verify the generated outputs from the AI system to maintain the integrity of legal advice.
Bias and Fairness: ChatGPT is trained on vast amounts of text data, which can inadvertently introduce biases. Monitoring and addressing those biases in ChatGPT’s responses is crucial to ensure fair treatment of clients and to avoid perpetuating systemic disparities. (See the note on human oversight under Professional Responsibility, above.)
Conclusion
Automating administrative processes through ChatGPT offers law firms increased efficiency and reduced workload. However, firms must navigate privacy and ethical concerns before harnessing these benefits. Striking a balance between automation and human oversight is the first step to leveraging the power of AI so that core principles of privacy and ethics are not compromised.