3. AI Decision-Making: AI Bias Could Trigger Discrimination Claims
Some AI systems don’t just automate tasks; they make decisions. When those decisions affect people, they can expose your business to liability under civil rights and employment laws. AI systems used in hiring, lending, housing, and medical services have all been found to produce biased results, and in some cases that bias amounts to a violation of laws such as:
Title VII of the Civil Rights Act (hiring and employment decisions)
The Fair Housing Act (housing and tenant screening)
The Equal Credit Opportunity Act (lending and credit decisions)
The Americans with Disabilities Act (employment and access to services)
As a rule of thumb, regularly audit your AI systems for biased outcomes, especially in areas that impact people’s access to jobs, housing, or healthcare. Train your models on diverse, vetted data and implement human review at critical decision points to reduce unintentional discrimination. Remember, bias in AI can be difficult to detect until harm occurs.
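One concrete way to start auditing is to compare favorable-outcome rates across demographic groups against the EEOC’s four-fifths rule of thumb. Below is a minimal sketch, assuming you log each automated decision alongside the applicant’s demographic group; the field names, group labels, and sample data are hypothetical, and a real audit would use your actual decision logs and legal counsel’s guidance.

```python
from collections import defaultdict

# Hypothetical decision log from an AI hiring screen:
# (demographic_group, decision) pairs. Data is illustrative only.
decisions = [
    ("group_a", "advance"), ("group_a", "reject"), ("group_a", "advance"),
    ("group_b", "reject"), ("group_b", "reject"), ("group_b", "advance"),
]

def selection_rates(records):
    """Share of favorable outcomes ('advance') per demographic group."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        if decision == "advance":
            favorable[group] += 1
    return {g: favorable[g] / totals[g] for g in totals}

def four_fifths_check(rates):
    """Flag any group whose selection rate falls below 80% of the
    highest group's rate -- the EEOC's rule-of-thumb threshold for
    disparate impact."""
    best = max(rates.values())
    return {g: (r / best) >= 0.8 for g, r in rates.items()}

rates = selection_rates(decisions)
print(rates)                      # {'group_a': 0.667, 'group_b': 0.333}
print(four_fifths_check(rates))   # group_b fails: 0.333 / 0.667 < 0.8
```

A failed check here is a signal to investigate, not a legal conclusion; the point is to surface skewed outcomes before they reach the scale where harm, and liability, occurs.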
4. AI Liability: You May Be Liable for What AI Says or Does
If your chatbot gives medical advice, your automated assistant makes misleading promises, or your AI-generated email campaign violates marketing laws, regulators may treat it no differently than if you had done it yourself. This is where the legal doctrine of vicarious liability comes into play: if your AI acts on your business’s behalf and causes harm, courts may hold you liable, especially if you failed to supervise its outputs.
Common scenarios include:
Chatbots making false or deceptive claims about your services
AI-generated customer responses that violate industry regulations
Email or SMS campaigns that breach spam or consent laws such as CAN-SPAM or the TCPA
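One practical supervision measure is a review gate that screens every AI-generated message before it goes out, holding anything risky for a human. Here is a minimal sketch, assuming a simple pattern-based filter; the risk categories, patterns, and function names are illustrative, and production systems would use far more robust compliance checks.

```python
import re

# Hypothetical compliance rules: categories of claims that should never
# go out unreviewed. Patterns are illustrative, not exhaustive.
RISK_PATTERNS = {
    "medical_claim": re.compile(r"\b(cure|diagnos\w*|treat(s|ment)?)\b", re.I),
    "guarantee":     re.compile(r"\b(guarantee[ds]?|risk[- ]free|100%)\b", re.I),
    "legal_advice":  re.compile(r"\b(legally|lawsuit|liab\w*)\b", re.I),
}

def review_gate(ai_reply: str) -> tuple[str, list[str]]:
    """Return ('send', []) if the reply is clean, or
    ('hold_for_human', [flagged categories]) if it trips any rule."""
    hits = [name for name, pat in RISK_PATTERNS.items() if pat.search(ai_reply)]
    return ("hold_for_human", hits) if hits else ("send", [])

# Usage: run every AI-generated customer reply through the gate
# before delivery; flagged replies wait for human sign-off.
status, flags = review_gate("Our supplement is guaranteed to cure fatigue.")
print(status, flags)  # hold_for_human ['medical_claim', 'guarantee']
```

The design point is that the human, not the model, makes the final call on anything touching regulated claims, which is exactly the supervision courts look for when assigning vicarious liability.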
Some states, such as California and New York, have already begun crafting legislation that assigns responsibility for harm caused by autonomous systems back to the businesses that use them.