Ethical Considerations and Tips for Using LLMs

Last semester, I wrote an article titled “The ‘Right’ Way To Use LLMs” for our Fall edition of PR Success. In it, I shared my personal tips and anecdotes about how to use large language models (LLMs) to write high-quality copy effectively.

However, my perspective has changed since then. I’ve become more aware of concerns regarding plagiarism and environmental impact, and I felt it would be remiss of me to leave a body of work out there that didn’t represent how I feel. I’ve also consolidated my discoveries on how to use LLMs a bit better.

So, the following is my current framework, as of February 2025, for how to use LLMs, along with some important things to consider about them.

How LLMs Are Developed

LLMs pull data from a wide range of public sources: books, articles, websites and social media. This broad coverage makes them versatile, but it also comes with risks. Sensitive or copyrighted material may slip into their training data, which can lead to confidentiality issues. For instance, if you provide confidential client details from your own projects to a public LLM, you risk exposing that information.

To address this, many organizations now rely on proprietary LLMs trained only on their own data. Some speakers at PRSSA have even shared examples from their companies. These models help keep information in-house without losing functionality. For communicators, this means pushing for policies that protect confidential work. Always ask whether a platform is safe for sensitive data. If you’re unsure, stick to your own writing or use approved internal AI tools.

However, there’s still the issue of plagiarism, which doesn’t have such a clear-cut answer. When you use LLMs, you risk incorporating someone else’s material into your own. These models may repeat exact phrases from existing works more often than you’d expect. That’s why it’s best not to rely on them entirely. Instead, treat them as a starting point (we’ll talk more about that later).

Environmental Impact

The use of LLMs, and generative AI as a whole, is detrimental to the environment. Data centers, which are essential for AI operations, account for approximately 2% of global electricity consumption. That substantial energy usage contributes to increased carbon emissions and places additional demand on water resources used for cooling.

Recent developments have introduced more energy-efficient LLMs. Notably, DeepSeek, a Chinese AI startup, has released its R1 model, which is free, open-source and designed for efficiency. R1 employs a “mixture of experts” architecture, activating only the networks relevant to a given task, which significantly reduces computational and energy requirements. This design also allows the model to run on less sophisticated hardware, so running it and other LLMs locally will become more feasible as consumer-grade computing power advances.
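If “mixture of experts” sounds abstract, here’s a toy Python sketch of the routing idea. To be clear, this is not DeepSeek’s actual code; the experts and the scoring below are made-up placeholders, meant only to show that a gate picks a few specialists per request and leaves the rest idle.

```python
# A conceptual toy of "mixture of experts" routing, NOT DeepSeek's
# implementation. The expert names and random scoring are placeholders.
import random

EXPERTS = {
    "grammar": lambda text: f"[grammar expert handles: {text}]",
    "math": lambda text: f"[math expert handles: {text}]",
    "code": lambda text: f"[code expert handles: {text}]",
    "style": lambda text: f"[style expert handles: {text}]",
}

def route(text: str, k: int = 2) -> list[str]:
    """Score every expert (randomly here; a learned gate in a real
    model) and run only the top k, leaving the others idle."""
    scores = {name: random.random() for name in EXPERTS}
    top_k = sorted(scores, key=scores.get, reverse=True)[:k]
    return [EXPERTS[name](text) for name in top_k]

print(route("Edit this press release."))
```

In a real model the gate is learned and the experts are neural sub-networks, but the payoff is the same: for any single request, most of the model never has to run.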

Despite these advancements, the existing AI infrastructure continues to consume significant energy. In 2024, Google reported that its greenhouse gas emissions had risen 48% since 2019, partly due to AI activities. Ideally, more regulation would be imposed on the industry, but that doesn’t seem likely given the interests of the current US administration.

So, where does that leave you? It’s a similar issue to the meatpacking and agricultural industries: their detrimental impact on the environment shows no sign of halting as long as consumer demand persists. It’s reasonable to assume that abstaining on your own won’t stop large corporate data centers from running at full capacity, just as skipping one hamburger won’t stop the meat industry’s carbon emissions.

Arguments can be made for and against the personal use of generative AI on this premise, but one thing is clear: it would be very hard to get the majority of the world’s consumers, legislators and corporations on the same page. At the very least, it’s something you should be aware of as you run that next prompt or type in a mundane search. I would say “just Google any questions you have,” but Google and other search engines now automatically incorporate LLM responses.

Optimizing Your Workflow with LLMs

Finally, the moment some of you have been waiting for: the useful stuff. The tips and tricks I’ve learned through experimenting with LLMs. It’s possible to get good copy from them without losing your sanity, but let’s be clear: you’ll still need critical-thinking skills, along with your own writing and research abilities. But yes, you can make your workflow quicker and more efficient with these tools.

First, start with some ground rules. Before asking an LLM to write anything, tell it to avoid lazy verbs like “emphasizes, highlights, underscores, stresses,” and “demonstrates.” These pop up constantly in AI writing, and they’ve made me averse to them in my own human writing.

Another habit LLMs have? Cranking out comparison statements like “It’s not this, it’s this,” “whether it’s this or this,” or “from this to this.” Keep those to a minimum; AI writing is rife with them. Also, LLMs might get overwhelmed by your prompts, misinterpreting or ignoring rules you’ve set. Sometimes, you have to be a little sassy and remind them of your instructions multiple times.
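If you ever work with an LLM through an API rather than a chat window, you can bake those ground rules in once instead of retyping them. Below is a minimal sketch assuming the OpenAI Python SDK and an API key set in your environment; the model name and the exact wording of the rules are placeholders to adapt to your own taste.

```python
# A minimal sketch of reusable ground rules via a system prompt.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable. The model name and rule
# wording are placeholders.
from openai import OpenAI

client = OpenAI()

GROUND_RULES = (
    "You are a copywriting assistant. Follow these rules in every reply:\n"
    "1. Avoid lazy verbs: emphasizes, highlights, underscores, "
    "stresses, demonstrates.\n"
    "2. Avoid comparison constructions like 'it's not X, it's Y' "
    "and 'from X to Y'.\n"
    "3. Reread your draft and rewrite any sentence that breaks a rule."
)

def draft(prompt: str) -> str:
    """Send one copywriting request with the ground rules attached."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; swap in whatever model you use
        messages=[
            {"role": "system", "content": GROUND_RULES},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

print(draft("Write a 100-word intro for a press release about a campus food drive."))
```

Keeping the rules in a system message also helps with the ignored-instructions problem, since they ride along with every request instead of scrolling out of the conversation.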

Copy that comes straight from an LLM won’t sound human. You should reword things so they sound the way you’d naturally say them. This is non-negotiable. Also, LLMs love dependent clauses, which you should minimize in your own writing anyway (that was an example of a dependent clause hehe). Telling the AI to omit these isn’t a bad idea.

Use LLMs as a starting point. The quality they produce ranges from mediocre to decent, with occasional flashes of greatness. If you see something an LLM writes and don’t like it, think about how you would say it better. One of my favorite methods? Have the LLM create an outline for whatever you’re writing. This saves time while letting you flesh out ideas with your own organic language.

I’ve found that instructing the LLM to use simple language, write concisely and avoid passive voice dramatically improves its output. For specialized topics, ask it to “act as an expert in [specific field].” This often sharpens the content a bit further. You can experiment with popular prompt formats, though I usually spitball at the AI until I get material that’s good enough to tweak.
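To make the outline-plus-persona approach concrete, here’s a tiny prompt builder that stitches those tips together. The template wording is my own placeholder, not an official format; paste the result into whichever LLM you use.

```python
# A toy prompt builder combining the tips above: expert persona,
# outline request and style constraints. The template is a placeholder,
# not a canonical prompt format.
def build_outline_prompt(topic: str, field: str) -> str:
    return (
        f"Act as an expert in {field}. "
        f"Create a short outline for {topic}. "
        "Use simple language, write concisely and avoid passive voice."
    )

print(build_outline_prompt(
    topic="a blog post on ethical LLM use for PR students",
    field="public relations",
))
```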

LLMs can also help sort your ideas. When my thoughts are scattered, I write messy, train-of-thought-style passages without stopping (I love using Squibler’s Most Dangerous Writing App, which deletes your progress if you pause) and then feed that chaos to an LLM to organize. From there, I edit and structure the copy to suit my needs.
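Here’s what that untangling step can look like in code, again assuming the OpenAI Python SDK; the file name, model and prompt wording are all placeholders.

```python
# A sketch of the notes-untangling step. Assumes the OpenAI Python SDK
# and an API key in the environment; the file name, model name and
# prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()

with open("messy_notes.txt") as f:  # your unfiltered writing dump
    raw_notes = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you have access to
    messages=[{
        "role": "user",
        "content": (
            "Group the notes below into themed sections with short headers. "
            "Keep my original wording; only reorder and label it.\n\n"
            + raw_notes
        ),
    }],
)
print(response.choices[0].message.content)
```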

Be wary of hallucinations and an LLM’s tendency to ignore your directions. You’ll need to fact-check everything an LLM says, and sometimes repeat instructions like a broken record. Providing your own research improves output quality, but be ruthless about attribution and plagiarism. And remember: never, ever put confidential data into public LLMs. That will get you fired.

The bottom line: LLMs are tools, not replacements. Use them for outlines, inspiration or untangling messy ideas. You should always inject your voice, polish their output and double-check their claims. Nail that balance, and you’ll save time without sacrificing quality.

Henry Gorsuch is a Journalism Strategic Communications major with a minor in Marketing and can be found on LinkedIn here.
