When to Use AI at Work (and When Not)

This post is part of Lifehacker’s “Living with AI” series. We explore the current state of AI, what it can do (and what it can’t do), and assess where this revolutionary technology will go next. Read more here.

Almost immediately after ChatGPT launched in late 2022, the world started debating how and when to use it. Is it ethical to use generative AI at work? Is it “cheating”? Or is this simply the next big technological innovation, something everyone will either have to embrace or be dragged along behind?

AI is now part of the job, whether you like it or not

AI, like everything else, is first and foremost a tool, and tools help us do more than we can on our own. (My job would literally be impossible without my computer.) In that respect, there is, in theory, nothing wrong with using AI to boost your productivity. In fact, some work apps now come fully AI-enabled. Just look at Microsoft: the company has essentially cornered the market on workplace computing, and it is building artificial intelligence capabilities directly into its products.

Since last year, the entire Microsoft 365 suite, including Word, PowerPoint, Excel, Teams, and others, has shipped with Copilot, the company’s AI assistant. Think of it as the Clippy of yesteryear, only far more useful. In Teams, you can ask the bot to summarize your meeting notes; in Word, you can ask the AI to draft a work proposal from your outline, then ask it to rework the specific paragraphs you don’t like; in Excel, you can ask Copilot to analyze and model your data; in PowerPoint, you can ask it to build an entire slideshow from a single prompt.

These tools don’t just exist: they are actively built by the companies that make our work products, and their use is encouraged. It reminds me of how Microsoft advertised Excel itself back in 1990: the ad portrayed competing spreadsheets as time-consuming, rigid, and short on features, while with Excel you could put together a working presentation in the time it takes to ride an elevator. We didn’t see that as “cheating” at work: it was work.

Leaning on AI intelligently is the same idea: just as Excel in the 1990s extrapolated data into cells you didn’t fill in yourself, Excel in 2023 will answer questions about your data and carry out commands you give it in ordinary language rather than formulas and functions. It’s a tool.
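To make that concrete (an illustrative example I made up, not a transcript of Copilot’s actual output): instead of typing a formula like =AVERAGEIF(B:B,"West",C:C) to average one region’s sales, you could simply ask, “What were the average sales for the West region?” and let the tool translate your question into the underlying calculation.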

What kind of work should you not use AI for?

Of course, there is still an ethical line you can cross here. Tools can be used to improve your work, but they can also be used to cheat. If you use the internet to hire someone else to do your job and then pass that work off as your own, you aren’t using a tool to do your job better; you’re being dishonest. If you simply ask Copilot or ChatGPT to do your job for you outright, that’s the same thing.

You also need to consider your own company’s guidelines on artificial intelligence and the use of third-party technologies. Given how prominent AI has become over the past year and a half or so, your organization may already have rules in place: perhaps your company gives you the green light to use AI tools within reason. If so, great! But if your company has decided you can’t use AI for any work purposes, stay logged out of ChatGPT during business hours.

But let’s be honest: your company probably won’t know you’re using AI tools, as long as you use them responsibly. The bigger issue here is privacy and confidentiality, something too few people think about when using AI at all.

In short, generative AI tools work because they are trained on huge data sets. But AI is far from perfect, and the more data a system has to work with, the more it can improve. You are training these AI systems with every prompt you give them, unless the service specifically lets you opt out of that training. When you ask Copilot for help writing an email, it takes the entire exchange into account, from how you respond to its replies to the contents of the email itself.

So, a good rule of thumb: never give confidential or sensitive information to an AI. The easiest way to stay out of trouble is to treat AI the way you treat email: only share something with a tool like ChatGPT if you would be comfortable sending it in an email to a colleague. After all, your emails may well become public someday: would you be OK with the world seeing what you said? If so, you can share it with the AI. If not, keep it away from the robots.
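If you do end up pasting work material into a chatbot, it’s worth scrubbing the obviously identifying bits first. Here is a minimal sketch of what that could look like in Python; the patterns are illustrative assumptions, not a complete redaction solution:

```python
import re

# Illustrative patterns only; real redaction needs to cover far more cases.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(redact("Ask jane.doe@example.com or call 555-123-4567."))
# -> Ask [EMAIL REDACTED] or call [PHONE REDACTED].
```

The point isn’t the specific patterns; it’s building the habit of sanitizing what you share before you share it.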

If a service offers you the choice, opt out of this training. That way, your interactions with the AI won’t be used to improve the service, and your previous chats will most likely be deleted from the company’s servers after a set period of time. Even so, always refrain from sharing personal or company data with an AI chatbot: if the developer stores more data than we think and is ever hacked, you may have put your work data somewhere it can’t be trusted.
