
Imagine a student faced with this assignment: “Analyze Starbucks’ international expansion in emerging markets. Consider cultural, economic, and governance factors.” Instead of researching and reflecting, the student copies the entire instruction and pastes it into ChatGPT with a simple “develop this.”
A few minutes later, he receives a perfectly structured document: well-crafted paragraphs, academic vocabulary, references to international management theories, and conclusions that sound profound. Without reading it thoroughly, he hands it in.
The teacher reads it and finds paragraphs like these:
“Starbucks’ international expansion into emerging markets represents a paradigmatic case of the tension between global standardization and local adaptation. From the perspective of Pankaj Ghemawat’s CAGE model, the company has successfully navigated cultural, administrative, geographical, and economic differences. Its hybrid approach, which combines universal brand elements with contextual adjustments, illustrates the sophistication necessary to succeed in heterogeneous markets.”
It sounds impeccable. Correct academic jargon, legitimate theory cited, flawless structure. But it’s completely interchangeable: it could be about Nike, Coca-Cola, or Inditex without changing a word. It doesn’t mention any specific market. It doesn’t reflect on real contradictions. It doesn’t show any personal research.
What you just read is workslop: content that appears well-crafted but completely lacks substance.
‘Trash’ content that looks good
The term, which has recently gained traction in academic and business circles, describes a phenomenon increasingly common in the age of generative artificial intelligence tools. In English, the noun "slop" refers to food that is runnier than it should be and looks unappetizing, or to dirty waste liquid. Possible Spanish translations of this neologism include "garbage content," "empty words," or "low-quality filler."
It’s not simply plagiarism or verbatim copying. Workslop is more insidious: it’s new content that appears academically sound and passes a cursory reading. In reality, it offers no intellectual value whatsoever because it was never the product of genuine thought. It’s junk disguised as jewelry: perfect in form but empty at its core.
The burden of processing workslop
Workslop isn’t immediately apparent because it meets formal standards: appropriate academic jargon, logical structure, and correct citations. But it lacks depth. The conclusions are generic, the arguments superficial and applicable to multiple contexts. For the reader, processing this content means time wasted on decoding, evaluation, and feedback. The experience is akin to an academic mirage: it promises knowledge where there is only emptiness.
This type of low-quality content creates a deceptive illusion: the appearance of progress, while in reality the cognitive load is transferred from the creator to the recipient.
In academic settings, beyond the lost productivity, this type of content erodes trust between professors and students. When someone receives workslop, they haven’t just wasted time decoding the content; they’ve also formed negative judgments about the sender. They wonder: “Why did they send this? Can’t they do their job? Don’t they value my time?”
AI as a problem and as a solution
But the same technology that creates workslop can also help us avoid it. It all depends on how we use artificial intelligence.
One of our recent studies reveals surprising patterns in the factors that determine whether AI returns inane, empty content or higher-quality material. After analyzing conversations between higher education students and AI chatbots during strategic analysis tasks, we discovered that the way a student communicates with the AI determines the quality of the resulting content.
Students who adopt a relational approach with AI demonstrate deeper critical thinking and thus produce higher-quality academic responses. For example, in response to a chatbot’s reply, a student might follow up with, “Interesting, can you explain…?” This more “relational” style and tone comes through in follow-up questions and displays of cognitive curiosity, which allow students to delve deeper into the case at hand.
Conversely, those who used a neutral tone and asked passive questions showed less cognitive engagement with the task. In other words, when students interact with AI as they would in a genuine conversation, the resulting content reflects that more sophisticated thinking. When they simply send cold instructions and wait for responses, we get workslop.
Treat AI as a collaborator
Here are two examples of how to use an artificial intelligence tool to produce an academic paper:
An approach that generates workslop:
Copy the task statement and ask “develop this” (without any prior reflection on what really needs to be analyzed or understood).
An approach that avoids workslop:
“I’ve read that Starbucks emphasizes local adaptation. Does that contradict its global positioning as a ‘premium’ brand? How do they address this in Asia? And if they do it this way in Asia, why don’t they apply the same strategy in Latin America?”
In the second case, the student is creating a genuine dialogue, asking questions, and seeking logical consistency. The AI, in turn, provides more insightful answers because it is being prompted reflectively. The result is academic content that reflects genuine critical thinking, not well-constructed sentences devoid of substance.
Critical thinking is essential
AI doesn’t necessarily create workslop. It does so when we use it without genuine critical thinking. When we use it to deepen reasoning, maintain authentic dialogue, and seek real alignment between intellectual intentions and responses, AI becomes an amplifier of thought, not a substitute.
From the results of our study, three recommendations for students emerge: use AI as a thinking partner, not a substitute; pay attention to the emotional tone used in communication; and validate the generated text and its contribution by asking: “Does the text add anything new, or does it simply rearrange the obvious?” If it’s the latter, you may be generating workslop.
The challenge isn’t technology. It’s our willingness to use it in a genuinely thoughtful way. The real question every student or researcher should ask themselves isn’t “Can the tool do this?” but “Am I using this to actually improve my thinking?” The difference lies with us.
Author Bio: María Isabel Labrado Antolin is Assistant Professor of Business Organization at the Complutense University of Madrid