There are numerous ethical considerations when using AI tools and generating new content. Who owns the output, especially if it's heavily influenced by or directly copied from copyrighted material? How do we think about human bias when it comes to the data analyzed by large language models (LLMs)?
As web practitioners, we have a responsibility to build new technology thoughtfully and responsibly. There are numerous efforts across the world to answer these questions (and more). We can't cover every concern, but we can open a dialogue about how to think about ethics when using AI.
Here are some key areas to consider when using and building with AI tools:
- Content ownership and copyright. Copyright is a legal protection for original works of authorship. The law differs from country to country, and many countries are still debating what happens with content generated by AI. Whenever you publish content, you should know the answer to the following question: Are you infringing on someone else's copyrighted content? This can be harder to answer than you might expect!
- Bias and discrimination. Computers and algorithms are built by humans and trained on data that may be collected by humans, so they are subject to human biases and harmful stereotypes. This directly impacts the output.
- Privacy and security. This is important for all websites and web applications, but especially when sensitive or personally identifiable information is involved. Sending user data to more third parties through cloud APIs is a concern, so it's important that any data transmission is secure and continuously monitored (see the sketch after this list).
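For example, if your application forwards user text to a third-party inference API, consider redacting obvious personally identifiable information before it leaves the device. The following sketch is illustrative only: the endpoint URL, request body, and response shape are assumptions, and simple regular expressions are not a complete privacy solution.

```typescript
// Minimal sketch: redact obvious PII before sending text to a hypothetical
// cloud inference endpoint. The endpoint and response field are assumptions.
const INFERENCE_ENDPOINT = 'https://example.com/api/generate'; // hypothetical

// Replace email addresses and simple phone-number patterns with placeholders.
function redactPii(text: string): string {
  return text
    .replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, '[email]')
    .replace(/\+?\d[\d\s().-]{7,}\d/g, '[phone]');
}

async function generate(prompt: string): Promise<string> {
  const response = await fetch(INFERENCE_ENDPOINT, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // Only the redacted text leaves the user's device.
    body: JSON.stringify({ prompt: redactPii(prompt) }),
  });
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  const data = await response.json();
  return data.text; // assumed response field
}
```

Client-side redaction like this reduces what a third party can see, but it doesn't replace secure transport, clear data-retention policies, and ongoing monitoring.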
Google's AI principles
We are committed to developing technology responsibly and establishing specific areas of AI we won't pursue. In fact, Google has committed to several AI principles, with a central team focused on governance and implementation.
In short, our objectives for AI applications are as follows:
- Bold innovation. We develop AI that assists, empowers, and inspires people in almost every field of human endeavor; drives economic progress; and improves lives, enables scientific breakthroughs, and helps address humanity's biggest challenges.
- Responsible development and deployment. Because we understand that AI, as a still-emerging transformative technology, poses evolving complexities and risks, we pursue AI responsibly throughout the AI development and deployment lifecycle, from design to testing to deployment to iteration, learning as AI advances and uses evolve.
- Collaborative progress, together. We make tools that empower others to harness AI for individual and collective benefit.
While we as web developers may not always be responsible for creating the models or collecting the datasets that train AI tools, we are responsible for the tools we choose to use and the end products we create with AI.
Organizations across the web thinking about ethics
There are a number of nonprofits, non-governmental organizations (NGOs), and companies that have centered their work and research on creating ethical AI.
Here are just a few examples.
- ForHumanity
- Center for AI and Digital Policy
- W3C working group
- Authorship and AI tools | COPE: Committee on Publication Ethics
There's much more work to do in this space, and many more considerations yet to uncover. We intend to be deliberate about ethical considerations for every piece of content we generate.