We believe that AI has the potential to positively and profoundly transform how we live and work. At Parrot AI, we also recognize the significance of maintaining ethical principles in how AI is developed and deployed.
We are committed to building products that respect the humanity of our customers, protect the security of their data and preserve their privacy. We also design our products with the goals of being fair, inclusive and transparent.
Security
Parrot AI has a robust security program that starts with this guiding principle:
"What would a product need to do in order for us to feel comfortable inviting it into our meetings?"
You can read about all the things we do to secure your data here.
Privacy
We believe that people should have the right to control their image, their ideas and how they are used. We believe this so strongly that one of the first patents we ever filed was for Redaction, which lets Parrot AI users permanently erase any part of a recording they are uncomfortable with, for any reason.
We are also committed to keeping your data private. We maintain a robust privacy policy. You own your data. We will never share it with anyone unless legally compelled to do so.
Fairness and Inclusivity
AI is only as fair as the data it is trained on. Biased data can lead to discriminatory outcomes that can result in the unfair treatment of individuals or groups. We monitor for this in our own products and we work with our partners to ensure that our AI is designed, implemented and tested for fairness in order to promote equity, social justice and public trust in our technology.
We also believe that AI shouldn’t be a tool exclusively for the wealthy and giant corporations. With a free plan and low-priced paid plans, Parrot AI is used by tens of thousands of students, non-profits, religious groups, small businesses, community organizers, local government, artists, writers, journalists and teachers.
Transparency
Finally, it’s critical to be transparent about how AI works and why it behaves the way it does. This is harder than it sounds, since AI behavior emerges from highly complex systems. Wherever possible, we work to make this behavior clear, both to increase trust and reliability and to highlight areas where the AI might be drawing the wrong conclusions.