AI is everywhere.
From the way we study to how we shop, write, work, eat, and even grieve, Artificial Intelligence has slowly, then all at once, seeped into our everyday lives. You’ve probably seen it: that AI-generated voiceover in a TikTok video, that chatbot suggesting what to wear, or even the eerie way your phone already knows what you’re about to type.
And I get it—AI can be really helpful.
As someone in school studying legal stuff, who wouldn’t want help planning their schedule, proofreading essays, or translating complex legal jargon into plain English? AI makes lots of things easier, especially for people with disabilities or anyone who just needs an extra hand.
In creative spaces, AI has even filled gaps we never thought possible. Think of movie sets where actors can’t make it—AI steps in. Or writing sessions where you’re stuck staring at a blinking cursor for hours; one prompt, and suddenly the floodgates open. From grief therapy, where AI is used to recreate the voice or image of a lost loved one (ethically debatable, I know), to content creation, AI is touching almost every part of our lives.
But with all these shiny new capabilities, it’s fair to ask: Are we being helped or hijacked?
Where does human creativity end and machine-generated content begin? What happens when AI starts “assisting” so much that it begins to erase the very essence of personal expression?
There’s also the growing number of students who rely so heavily on AI that they can barely construct a proper sentence without it. I’ve seen it—essays that sound polished on paper but can’t be defended in class. Group work where someone’s writing style changes completely overnight. Or worse, students who freeze during presentations because they’ve gotten so used to outsourcing their thoughts that they don’t know how to articulate them on their own. It’s one thing to use AI as a tool to enhance your ideas, but when it becomes a crutch, that’s a different story. We lose something when we stop thinking for ourselves: our voice, our clarity, and our ability to grow through struggle.
As a writer, this gets personal. The things I grew up learning and loving about literature through reading—intentional punctuation, long em-dashes, and lyrical phrasing—are now considered red flags by AI detection software. And in school, especially if you’re studying the social sciences or any writing-heavy field, this hits even harder. Imagine pouring your heart into a 10-page paper, doing all the research, crafting every sentence with care, only to get a poor grade because someone thinks AI wrote it. It’s frustrating. I use my voice. I write like I always have. And then I’m told: “This seems AI-generated.” That’s wild. Especially when the whole point of writing is to express you, not a tool, not an algorithm. You.
And outside the classroom? It’s no easier. As a creative writer, or even just someone who writes online, there’s now this pressure to prove that your words are yours. People are quick to assume that if something is well-written, it must have been AI. It makes you second-guess your style, like maybe you should “simplify” your tone or avoid sounding too polished, just to seem more human. That’s a weird place to be as a writer. Suddenly, being articulate feels suspicious. And for those of us who’ve spent years building a voice, who find healing, identity, and purpose in our writing—it feels like the ground under us is no longer solid. Writing used to be about freedom. Now, it sometimes feels like walking on eggshells, hoping your originality won’t be mistaken for something machine-made.
Then there’s the legal and ethical angle—one I can’t ignore. As powerful as it is, AI lives in a legal grey zone. Most laws haven’t caught up with how fast the technology is evolving, which creates a lot of room for harm, exploitation, and confusion. One major issue? Accountability.
AI-generated images can now place my face—or your face—on someone else’s body. Without consent. Without context. Deepfakes blur the lines between real and fake in terrifying ways. Artists are losing credit. Writers are being replaced. Personal data is being harvested, analysed, and used... and most of us don’t even realise it. It’s not just misleading—it’s dangerous. Imagine someone fabricating a political speech, a crime scene confession, or a compromising video using someone else's face. Legally, this crosses lines around identity theft, defamation, and fraud—but the enforcement? That’s still shaky.
Let’s say an AI tool generates false or defamatory content about someone. Who’s legally responsible? The person who used the tool? The developers who built the model? The platform that published it? We don’t have clear answers yet. That lack of legal structure can leave victims with no path to justice and creators with no clear boundaries.
Another big legal challenge is around copyright and intellectual property. AI tools are often trained on existing works—books, photos, songs, blog posts—without asking creators for permission. This raises serious concerns about creative ownership. If an AI is trained on a hundred writers’ styles and then mimics one of them in a piece it generates, can the original writer claim infringement? Technically, the AI isn’t “copying”—but it’s also not completely original. These legal loopholes threaten to undermine the value of human creativity and rob artists of recognition (and income) for their influence.
And we haven't even scratched the surface of surveillance. Who’s watching the watchers when AI is embedded in law enforcement, facial recognition, and even courtroom decisions? These aren’t just future problems—they're happening now. It’s giving Black Mirror. It’s giving “we should be concerned.”
From a legal standpoint, we’re trying to tame a wild horse with outdated reins.
And the law needs to catch up, fast.
So, like any powerful tool, AI comes down to the way you use it. Think of it like a calculator in math class: it helps speed things up, but it doesn’t replace your understanding of the subject. And as someone who enjoys writing, as much as I’m against a lot of the ways AI can be used, there are still plenty of ways to use this aid without it completely hijacking your sense of self and original thought.
AI is a creative partner, not a replacement: use it to generate ideas, help you with structure, or fix your grammar or punctuation, not to change your entire sense of self. Customise the output. You are the curator. AI is just an assistant. Inject your personality, your style—things that make it you.
Be transparent, ask yourself why, and set boundaries: If writing is your source of income, whether as a public writer or in academia, be honest about your use of AI, even if it was just an aid. There’s a difference between checking grammar and punctuation and completely outsourcing your voice. Ask yourself: Is this helping me say what I want to say, or does it just make things easier? AI should help you clarify thoughts, not blur your originality. Set boundaries for yourself to keep that sense of you in everything you do.
But still, I’m torn.
Because I do use AI. A lot of us still do. We ask it questions. We brainstorm with it. We might even write with its help. But here’s the thing: it didn’t replace me. It assisted me. It made the messy parts a little less messy. But the thoughts? The voice? The chaos? All me. All you.
Yes, you can use AI without losing yourself. Just don’t give it the pen. It’s fine to keep it beside you, but not in front of you. Even though AI can mimic your tone, it is not you. It didn’t stay up with you past midnight, pumped up on caffeine, trying to read and summarise for that paper. It didn’t sit with you while you racked your brain to get past that writer’s block.
So maybe the question isn’t, “AI, are you for or against us?”
Maybe the better question is:
“How do we stay human in a world that keeps making it easier not to be?”
AI is powerful, but it has no lived experience.
That’s your edge—and it always will be.
Wingin’ It, Still Writin’ Through it! Till Next Time :)
Sources & Further Reading
Here are some of the articles and sources that I read while writing this, especially around the legal, ethical, and creative complexities of AI:
AP News
📌 Industry leaders urge Senate to protect against AI deepfakes with No Fakes Act
Explores legislative efforts to regulate AI-generated deepfakes and voice cloning, especially in the context of consent and identity protection.
🔗 Read it here
The Guardian
📌 We have a chance to prevent AI decimating Britain's creative industries – but it's slipping away
Argues that AI could undermine creative work in the UK without immediate and thoughtful regulation.
🔗 Read it here
Harvard Business Review
📌 Generative AI Has an Intellectual Property Problem
Breaks down how generative AI uses existing creative work without consent, and why it’s such a big deal for writers, artists, and creators.
🔗 Read it here
Thomson Reuters
📌 Deepfakes: Federal and state regulation aims to curb a growing threat
A helpful overview of how lawmakers are trying to keep up with the risks deepfakes pose—from misinformation to identity theft.
🔗 Read it here
National Conference of State Legislatures (NCSL)
📌 Artificial Intelligence and Law Enforcement: The Federal and State Landscape
Covers how AI is being integrated into policing and surveillance—and why that’s raising red flags, especially around bias and accountability.
🔗 Read it here