Lancaster Tech Law founder discusses benefits and risks of artificial intelligence

  • By Chris Reber/LNP | LancasterOnline
Brandon Harter, of Lancaster Tech Law, talks about artificial intelligence during an interview in his East Hempfield Township office Thursday, Dec. 28, 2023. Harter has been advising the local business community on emerging issues related to AI.

 Blaine Shahan / LNP | LancasterOnline

As founder of Lancaster Tech Law, attorney Brandon Harter helps companies deal with legal cases involving data breaches, software licensing, copyright infringement and other tech-related issues.

With the rise of chatbots like ChatGPT, clients at the East Hempfield Township-based law firm have had more legal questions involving artificial intelligence.

Harter says that court rulings around AI are evolving rapidly, like the technology itself, and will continue to do so in the coming years.

He is also interested in using AI in his own firm, which he founded in 2022, to make the practice of law as efficient as possible.

Harter spoke with LNP | LancasterOnline about the possibilities, concerns and legal issues he is seeing around AI as it grows in popularity. The interview has been edited for length and clarity.

Is AI a growing part of your firm’s work?

AI is one of those technologies like email and text messaging that is going to touch almost every industry. It’s just a question of how it does that. People have the impression that certain jobs will be replaced and everybody else will be fine. But whether it’s generating images, text, or summarizing materials, it’s really going to impact almost every job you can think of.

Are there a lot of businesses using AI locally, and in what applications? How are they benefiting from it? 

A lot of companies are using AI to draft blog posts, marketing pieces and correspondence. A lot of times it's young, hungry people at the bottom of the company who are driving it.

Companies looking to implement it across their systems are the minority, but they exist: in health care, summarizing medical records; in banking, as part of the decision process; in law firms, summarizing documents.

There are a lot of creative applications. A company I work with has fed an existing set of blog posts into a chatbot on their website. Instead of using ChatGPT and answers drawn from the entire internet, it only answers from their own content.

It makes for better automated customer service. It avoids hallucination, which is when the AI makes something up if it doesn’t know the answer.
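The pattern described here, a chatbot that answers only from a company's own content and admits ignorance rather than guessing, can be sketched in a few lines of Python. The posts, stopwords, and keyword-overlap scoring below are invented for illustration; a real system would use a vendor's retrieval tooling, not this toy matcher:

```python
# Minimal sketch of a content-grounded chatbot: it answers only from
# a company's own posts, and refuses rather than hallucinates.
# The posts and scoring are illustrative, not any real product.

POSTS = [
    "Our bakery ships sourdough loaves anywhere in Pennsylvania.",
    "Holiday hours: we are closed December 25 and January 1.",
]

STOPWORDS = {"the", "a", "an", "is", "are", "do", "we", "you", "for", "what", "when"}

def answer(question: str) -> str:
    q_words = {w.strip("?.,").lower() for w in question.split()} - STOPWORDS
    best, best_score = None, 0
    for post in POSTS:
        p_words = {w.strip("?.,").lower() for w in post.split()}
        score = len(q_words & p_words)  # count shared keywords
        if score > best_score:
            best, best_score = post, score
    # Key design choice: no match means no answer, not a made-up one.
    return best if best else "Sorry, I don't have that information."
```

The important line is the last one: when nothing in the company's own content matches, the bot says so instead of generating a plausible-sounding guess.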

AI systems are ‘cocktail party smart.’ They can talk like a doctor or lawyer and mimic what a good response would be, whether or not the content is accurate. There was a recent high-profile case where a lawyer was sanctioned because he used AI to write a brief, and it cited court cases that don’t exist. You’ve got to be careful.

What potential liabilities are you issuing warnings about to companies considering using AI?

The most basic thing is to read and fact check the content that AI generates.

The second thing is thinking about where it got the information it is summarizing. There are industry-specific AI tools, such as medical ones that pull from medical journals to make diagnoses instead of from Reddit.

The last one is, companies need to be careful from an intellectual property standpoint. There are serious questions about whether AI-generated content can be copyrighted. It’s not unique to you. A human didn’t create that content. You want to be careful about relying on it too much for generating logos and things like that. If it’s really important to you, maybe don’t let the AI do all of the work.

My guess is that courts will say – if you make a prompt and it creates something that has never been seen before, you’re probably going to have legal rights to it. And if you tell it to, for example, write a song with lyrics like Snoop Dogg, you won’t.

AI is being used a lot in hiring to screen applicants and resumes. What are the benefits of that, and what are the potential issues with bias and discrimination?

A lot of companies, including local ones, are using AI to filter through resumes, especially when they use online platforms which generate a lot of applicants. It’s useful for large, complicated organizations, as well as the small bakery that doesn’t have time to look through 100 resumes every time they hire.

They are very useful, but you have to be concerned that they’re not filtering out certain groups of people. For example, if you feed an AI the last 100 years of employment data, there are going to be past decisions that are not desirable in the modern context – that were based on race, based on gender. Programmers may not be trying to eliminate someone because of the color of their skin, but if the data set comes in that way, AI will mimic what it is taught.
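The point that AI "will mimic what it is taught" can be shown with a toy example. The hiring data and keywords below are entirely invented: a simple scorer trained on past decisions that consistently rejected resumes containing one keyword will penalize that keyword in new candidates, even though no one programmed it to:

```python
# Toy illustration of how a screening model trained on biased past
# decisions reproduces the bias. All data here is invented.
from collections import defaultdict

# Historical (resume keywords, was_hired) pairs with a biased pattern:
# resumes mentioning "womens" were consistently rejected.
history = [
    (["python", "sales"], 1),
    (["python", "womens", "sales"], 0),
    (["excel", "womens"], 0),
    (["excel", "sales"], 1),
]

# "Train": score each keyword by the hire rate of resumes containing it.
totals, hires = defaultdict(int), defaultdict(int)
for words, hired in history:
    for w in words:
        totals[w] += 1
        hires[w] += hired

def score(words):
    # Average per-keyword hire rate: mimics whatever the data taught it.
    return sum(hires[w] / totals[w] for w in words) / len(words)

# Two otherwise identical candidates; one extra keyword drags the
# second score down purely because of the biased history.
print(score(["python", "sales"]))
print(score(["python", "sales", "womens"]))
```

No rule about the flagged keyword was ever written; the penalty emerges entirely from the historical decisions the model was fed, which is exactly the risk with real hiring data.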

What are the privacy concerns around using AI for businesses?

You need to be careful about what is being transmitted out. Are you feeding it information about marketing, pricing, or customer communications? What is the AI provider allowed to do with that information? Are they allowed to sell it to other people?

AI works best when it has more to learn from, so these tools tend to grab as much information as possible. It’s easy to leave the door open and allow them to grab more than you might want.

What are the laws around data privacy in Pennsylvania? Are they more or less strict than other states?

Pennsylvania 100% needs a more updated set of regulations about data privacy, and AI in particular. State law does require companies to notify customers if data is stolen, but only when a specific combination of information is involved, like a bank account number and a full name. The obligation also stops once the company notifies the customer. There’s no punishment, no sanction. That’s not truly protecting anybody’s privacy.

What are the ways that hackers can use AI that people and businesses should be aware of?

There is the impression that a hacker is one person, but it is actually an entire industry with multiple levels of sophistication. There are people who write code, and other people who collect data. In between there is the real criminal layer, which buys the software, buys a customer list from the dark web, and automates it to attack a large number of people quickly.

AI is really good at taking large pieces of information – such as Social Security numbers and addresses – and summarizing it in a way that’s dangerous.

Are companies using AI for surveillance and productivity tracking, and what are the issues there?

At the end of the day, the danger of AI is its power. Humans can pull together information and build a profile, but the difference is the amount of information that can be reviewed, and the speed. One example is police surveillance. Most people are OK with police following a car if they suspect something criminal is going on. They’re OK with traffic camera footage being assembled to investigate a crash. But what if that information is fed into an AI program, and used to track someone in real time using facial recognition? That’s where AI gives sort of a superpower that makes people nervous, and should make people nervous.

AI is making big waves in education. Are there reliable AI detection tools out there, and could their use by schools or businesses create liability issues?

It’s very challenging in high school and college, where you’re writing more complex papers. Direct plagiarism is easy to detect. AI generated text is much more complicated. If a student starts with AI, and finishes the paper themselves, could that be OK? In a workplace application it is. Of course, teachers may give specific instructions to not use generative AI, because they want their students to prove they can do the work without that shortcut.

In general, what are your feelings about the recent advances and the growing potential for AI with chatbots like ChatGPT?

Generative AI is going to be a leap forward in efficiency, and do something meaningful with the massive flood of information we are given on a daily basis. It can free us up to do other things, and help us as a society moving forward. There are definitely going to be growing pains, like with any technology, but it’s a really exciting technology to improve the way that we work, play and live our lives.

Support for WITF is provided by:

Become a WITF sponsor today »

