If you’re using ChatGPT to write up reports, translations, research, social posts, client emails, and even internal emails, be careful what information you put into the query before you press SEND. Many AI tools retain your prompts and may use them to train future models, so if you put proprietary information into ChatGPT (or other AI tools), there’s a risk that data becomes available to the general public, and you could lose your intellectual property (IP) in the process.
53% of respondents to the recent Cisco 2024 Data Privacy Benchmark Study* said they have used internal process details to shape their queries to ChatGPT and other AI tools. More worryingly, 33% of respondents said they had used employee data, non-public company material and customer information to create a ChatGPT query. Imagine unwittingly putting patents, code, and product roadmaps into a tool to generate a report or text, and then suddenly… it’s out there!
If 69% of respondents to the same survey say they have concerns that the use of AI tools could hurt their organisation’s legal and intellectual property rights, why then are we being so careless with what information we feed into these systems… and what can we do about it?
ALWAYS consider data privacy and security before using an AI tool
The foremost question to ask before you type up a ChatGPT query is: should I be sharing this outside my organisation, and does it align with the Australian Privacy Principles (APPs)? If it is likely to breach a privacy rule, or if you wouldn’t normally share that piece of data with anyone outside your organisation, don’t type it into the AI tool.
There are a number of checks you can do on the AI tool you are using to see whether it has clear policies regarding data privacy and security, and whether it complies with the APPs. Look for AI tool providers that are transparent about their data practices, including how they collect, use, and store data. Is it stored in Australia, or does it get transferred internationally? How do they protect against data breaches? And make sure that you retain ownership of the data you input into the AI tool.
If you do have to put potentially sensitive information into an AI tool, first consider whether it’s absolutely necessary. Then, if you can, de-identify and anonymise the information to avoid any privacy issues, for example by stripping names, contact details and identifiers before the text ever leaves your systems, as the sketch below shows.
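As a rough illustration of what “de-identify before you send” can look like in practice, here is a minimal Python sketch that scrubs a couple of common identifier patterns from a draft before it goes anywhere near an AI tool. The patterns and the redact helper are illustrative assumptions for this sketch only; real de-identification needs far wider coverage and is best handled by a dedicated PII-detection tool.

```python
import re

# Illustrative patterns only (an assumption for this sketch, not a complete
# list). Real de-identification needs much broader coverage -- names,
# addresses, account numbers -- but the point is simply that scrubbing
# happens BEFORE the text leaves your systems.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace anything matching a known identifier pattern with a placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

draft = "Contact Jane Doe on +61 412 345 678 or jane.doe@example.com about the Q3 roadmap."
print(redact(draft))
# Contact Jane Doe on [PHONE REDACTED] or [EMAIL REDACTED] about the Q3 roadmap.
```

Note that the name still slips through, which is exactly why pattern-matching alone isn’t enough for anything genuinely sensitive: treat it as a first line of defence, not a guarantee.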
And remember that large language models like ChatGPT can sometimes reflect biases present in the data they were trained on. They are only as good as the information they are fed and can access. Be cautious of any biases in the outputs of the AI tool and consider how they might impact privacy or fairness.
If you’d like to know more about how AI tools could benefit your business, talk to us at A7.
*Source: ACS - Information Age - ICT News - 1/2/24