In a World of Disruption, How Can Humanities Workers Better Use AI?

By: blockbeats|2026/03/05 18:00:01
Original Title: "AI Guide for Humanities Workers"
Original Author: Han Yang MASTERPA, Founder of Funes

Humanities workers have not created world-changing events, but they endure world-changing events.

Sometimes I feel that people who sell AI tutorials treat AI as a kind of magic: hand you a magical prompt and you can do anything. Reality, of course, is not like that. Over the past while, since founding FUNES, we have had to rely heavily on AI in daily production. Beyond "The World of Fuli," my own writing, and other content, manpower alone is not enough, so we have experimented a great deal with using AI to assist our content marketing and humanities research.

Later, when a new colleague joined the company, I made a simple Keynote for them. After Teacher Jia Xingjia heard about it, he invited me to give a presentation. My partner and I named the presentation "AI Guide for Humanities Workers." At the time it was a purely private sharing, focused mainly on general principles; it was later expanded through several iterations.

Over the past year, I have shared this experience of using AI with many friends who create content, do research, and build knowledge products. Its goal is not to hand you a few magical keywords to memorize, nor to treat AI as a panacea; it is closer to a set of working methods: a way to truly integrate large models into your writing, research, editing, topic selection, material organization, and production processes without writing code, while keeping everything traceable, supervisable, and verifiable, so that in the end you are still willing to sign your name to the work.

This method comes from the pitfalls we hit in real projects: once content enters large-scale production, manpower alone collapses; and simply telling AI to write an article produces hallucinations, laziness, and AI-sounding prose. So we had to turn creation into a production line, and the production line into an iterating system.

Today I don't want to hand you a pile of keywords; I want to give you the key guiding ideas and principles.

Before the Principles: Three Baselines of This Guide

Before diving into specific methods, three baselines need to be clear. They determine how you "use AI" and why you use it this way.

1. The process must be traceable, supervisable, and verifiable
You cannot focus only on the result and ignore the process. For humanities work, the black box is the most dangerous place: hallucinations, misdirection, and concept-swapping all happen silently inside it.

2. The Process Must Be Controllable
You should be able to control how it is done, by what standards, where to slow down, and where to tighten up. You are not drawing a random card; you are running production.

3. You Still Want to Sign Off in the End
"Am I willing to put my name on it?" is the final quality check. If you are not willing to sign, it is usually not an ethical problem but a sign that your judgment was absent throughout the process, which also means the quality was never under control.

Principle 0: Don't Make Wishes to AI; Treat It as a Workbench

Many people’s use of AI is essentially making wishes:
“Give me a good paragraph,” “Help me write a good article,” “Explain this paper.”

The problem is that "explain" admits countless interpretations: explaining for laypeople, undergraduates, postgraduates, or peers are entirely different tasks. AI cannot know your background, purpose, taste, and standards by default. If you don't spell them out, it can only give you the least-effort answer pitched at an "average human."

Treating a large model as a workbench means: you don't demand results from it; instead, you mobilize its tools to complete a process. What you need to do is to clarify the task, clarify the standards, and lay out the steps.

For example, having AI explain a paper

You can transform a wishful request (explain this paper to me) into a workbench-style task like this:

· Clearly define the target audience: graduate students who are intelligent and curious but not domain experts

· Clearly define the way of explanation: heuristic, step-by-step, with academic rigor

· Clearly define structural requirements: start with significance, then provide background, then reconstruct the research process, then explain key technical points, then offer insights

· Clearly define tone: respect intellect, not condescending, not assuming the other party already has a deep foundation

You will find that the more your brief reads like an assignment sheet, the less the output sounds like AI and the more it sounds like a real teaching assistant.
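The "assignment sheet" above can be sketched as a small prompt builder. This is a minimal illustration, not the author's actual tooling; the function name, field names, and example values are my own:

```python
def workbench_prompt(task, audience, method, structure, tone):
    """Assemble an 'assignment sheet' prompt instead of a one-line wish."""
    sections = [
        f"Task: {task}",
        f"Target audience: {audience}",
        f"Way of explanation: {method}",
        "Structural requirements:\n"
        + "\n".join(f"  {i + 1}. {s}" for i, s in enumerate(structure)),
        f"Tone: {tone}",
    ]
    return "\n\n".join(sections)

prompt = workbench_prompt(
    task="Explain this paper",
    audience="intelligent, curious graduate students who are not domain experts",
    method="heuristic and step-by-step, with academic rigor",
    structure=["state the significance", "give the background",
               "reconstruct the research process",
               "explain the key technical points", "offer insights"],
    tone="respect the reader's intellect; do not condescend or assume deep prior knowledge",
)
print(prompt)
```

The point is not the helper itself but the habit: every field you fill in is a decision the AI would otherwise make for you, in the laziest possible way.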

Principle 1: To Want AI to Do Well, Reflect on Yourself First — You Are the Responsible Party

If you hired a secretary, you wouldn’t just say:
“Fix up that article by Han Yang on the American Rust Belt.”

You would definitely add:

Why this article, who it is for, where it is currently stuck, what problem you want solved, what must not be touched, the preferred style, and the metrics you care about.

AI is the same. Treat it like a very diligent, very polite colleague who does not share your tacit assumptions. Real "prompt engineering" is not a trick but a sense of responsibility: the task is still yours; AI is only helping you do it.

When you are dissatisfied with AI's output, the most effective first response is not "AI is not good," but:

· Did I clearly explain the "object/audience/purpose"?

· Did I provide enough background information and constraints?

· Did I break down "abstract desires" into "executable actions"?

· Did I provide criteria for judgment?


Principle 2: Ask at least 3 models the same question — each AI has a "personality" and expertise.

In our company, I ask anyone encountering large models for the first time to put the same question to three different AIs during their early use. AIs, like humans, differ: some are better at wording, some at reasoning and problem-solving, some at code or tool invocation. More practically, new versions of the same model keep shifting in "style" and "boundaries."

So a very simple but extremely effective habit is: throw the same question at no fewer than three different AIs, and you will quickly develop a "feel" for:

· Which one writes better, which one thinks better, which one is better at looking things up, and which one is most likely to be lazy

· Which tasks suit which model for the first draft, and which model as the reviewer

· Which one is better at supplying topics and structure, and which at supplying paragraphs and sentences

The value of this step is not in picking the single strongest model; it is that you start managing models the way you manage a team, instead of treating one as the sole oracle.
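The fan-out habit is mechanical enough to sketch in a few lines. Here the three model labels and the stub callables are hypothetical stand-ins; in practice each would wrap a real API client:

```python
def fan_out(question, models):
    """Put the same question to several models and collect the answers
    side by side. `models` maps a label to any callable that takes a
    prompt string and returns a text answer."""
    return {name: ask(question) for name, ask in models.items()}

# Stub callables stand in for real API clients; labels are illustrative.
models = {
    "writer":   lambda q: f"[polished prose answer to: {q}]",
    "reasoner": lambda q: f"[step-by-step reasoning about: {q}]",
    "searcher": lambda q: f"[sourced summary for: {q}]",
}

answers = fan_out("What hollowed out the American Rust Belt?", models)
for name, text in answers.items():
    print(f"--- {name} ---\n{text}")
```

Reading three answers to the same question side by side is what builds the "feel" faster than any benchmark table.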

Principle 3: AI is not omniscient — treat it as having the common sense knowledge of a "good undergraduate student."

A very practical expectation management is:
AI's common sense ≈ that of a top-tier undergraduate student.

If there is something you think "even an excellent undergraduate might not know," assume the AI does not know it either; at the very least, assume it will confidently pretend to know.

This will lead to two direct actions:

1. Anything beyond common sense, you have to teach it
For example: if you want it to write jokes, write copy with a truly unique taste, or write highly professional arguments—you can't just say "write better," you have to provide examples, standards, boundaries, and language materials. I believe that if you were to explain to a friend right now what good writing means to you, it would take some time; so how can you assume that AI automatically knows?

2. You have to treat it as a collaborating intern, not as a god
It can do a lot of "micro-interpolation" work: completing the scaffolding you provide, turning the materials you provide into readable text. However, the "scaffolding" and "direction" still come from you.

Principle 4: Let AI Approach the Goal Step by Step — A White-Box, Step-by-Step Process Beats a Black-Box One-Shot

The advantage of AI is not to "give you the correct answer directly," but that it can reliably complete many small steps within the process you design. The more you demand "one-step solutions," the more likely it is to become a "seemingly complete yet lazy" black box.

A particularly intuitive example is doing TTS (Text-to-Speech) or processing a reading script. Instead of saying "pay attention to homophones, don't misread," it's better to break the task into a series of steps, such as:

· Mark pauses/emphasis/speed changes

· Identify potential homophones

· Verify based on dictionary or authoritative pronunciation (if necessary, check first and then confirm)

· Pre-annotate commonly misread words

· If all else fails, swap in an unambiguous synonym or homophone to eliminate the misreading at its root

These "obviously correct practices" are things a human assumes go without saying; AI assumes nothing. If you don't write the "obvious" into the process, it will make mistakes along the path of least effort.
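The broken-out steps above can be run as a white-box pipeline rather than one opaque request. A minimal sketch, where `llm` is a placeholder for any model call and the step wording is my paraphrase of the list:

```python
# Each small step is its own prompt, run in sequence; the output of one
# step becomes the input of the next, so every stage can be inspected.
TTS_STEPS = [
    "Mark pauses, emphasis, and speed changes in the script below.",
    "Flag every word in the script with more than one possible pronunciation.",
    "For each flagged word, confirm the reading for this context against a dictionary.",
    "Pre-annotate commonly misread words with phonetic hints.",
]

def run_tts_pipeline(script, llm):
    """White-box pipeline: each step's intermediate output is visible
    and checkable before the next step runs."""
    text = script
    for step in TTS_STEPS:
        text = llm(f"{step}\n\n---\n{text}")
    return text

# Demo with a stub `llm` that just tags each pass:
annotated = run_tts_pipeline("To read the lead pipe record aloud...",
                             llm=lambda prompt: prompt + "\n[annotated]")
```

Because each step is a separate call, a hallucinated pronunciation is caught at the step where it appears, not buried in a finished recording.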

Principle 5: Industrialize First, Then AI-ify — You Can't Leap from the Agricultural Age Straight into the AI Age

If your writing or research process is itself random, inspiration-driven, and data-disorganized, it will indeed be hard to hand over to AI, because AI can only catch the parts of it you can describe and reproduce.

A more practical path is:

1. First turn the work into a production line: divisible, reusable, and quality-checked

2. Then delegate the sub-steps to AI: let it be a worker, not a deity

We did a dumb but crucial piece of work: mapping out exactly how I myself write a non-fiction article, including:

· Why start with this story

· Why choose this sentence

· How to rate examples

· How to structure, transition, and conclude

· How to connect a small story to a larger picture

In the end this broke down into dozens of steps, each assigned to a different AI. The result: the models did not suddenly become stronger; the process strung together their "can only do a little at a time" abilities.

When you can clearly describe "how my article was created," you will find that the determining factor of quality has never been "which large model to use," but whether you have explained the workflow clearly.
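The "dozens of steps, each assigned to a different AI" idea can be sketched as a step-to-worker assignment table. Everything below is a hypothetical illustration, not our actual production line; the step names, worker labels, and stub callables are invented:

```python
# Hypothetical production line: each step names which worker (model) runs it.
PRODUCTION_LINE = [
    ("propose why this opening story fits the topic", "reasoner"),
    ("rate the candidate examples against the brief",  "reviewer"),
    ("draft transitions between the sections",         "writer"),
    ("connect the small story to the larger picture",  "writer"),
]

def run_line(context, workers):
    """Run each step with its assigned worker.
    `workers` maps a label to any callable(prompt) -> text."""
    return {step: workers[label](f"{step}\n\nContext:\n{context}")
            for step, label in PRODUCTION_LINE}

# Stub workers for demonstration; real ones would wrap different model APIs.
workers = {name: (lambda n: lambda p: f"[{n} did: {p.splitlines()[0]}]")(name)
           for name in ("reasoner", "reviewer", "writer")}

notes = run_line("a non-fiction piece on the Rust Belt", workers)
```

The table, not any single model, is where the quality lives: you can swap a worker without touching the line, or audit one step without rereading the whole draft.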


[Image: some of the steps used during testing at the time]

For more details, though, I strongly recommend listening to the show.

Principle 6: Anticipate AI Will Slack Off—It Will Save Computing Power, You Need to Clear the "Format Barrier" for It

AI will slack off, and the slacking is systemic: it will avoid opening a webpage or reading a PDF, and skip whatever it can. It isn't malicious; under constraints of computing power and time, it naturally takes the path of least resistance.

So what you need to do is: utilize AI's computing power to "comprehend text" rather than waste it on "handling formatting."

Some highly effective changes include:

· Convert materials into plain text/Markdown as much as possible before feeding them to AI

· Copy web content into clean text (remove navigation, ads, footer noise)

· For lengthy materials, first do "fact extraction/structure extraction," then let AI do the writing

· Standardize PDF/EPUB/web content into retrievable TXT before performing subsequent tasks

You will find that many people resist this kind of "manual labor," feeling that "machines should do the dirty work for me." But in human-AI collaboration the opposite holds: if you are willing to do a little mechanical work, the intellectual work AI does becomes sharper and more reliable.
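Clearing the "format barrier" can be as simple as a standard-library tag stripper. A minimal sketch using Python's built-in `html.parser`; the skip-list of noise tags is my own choice for illustration:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Strip tags and skip navigation/script noise, keeping readable text."""
    SKIP = {"script", "style", "nav", "footer", "header", "aside"}

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0  # >0 while inside a noise element

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())

def html_to_text(html):
    """Convert an HTML page into clean plain text for feeding to a model."""
    p = TextExtractor()
    p.feed(html)
    return "\n".join(p.parts)
```

Run on a saved page, this hands the model only the text it should be comprehending, not the navigation and ads it would otherwise spend its effort skipping.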

Principle 7: Remember the Finite Context—Try to Compress the Task as Much as Possible Instead of Expecting it to "Magically Expand"

AI has a context window, a memory limit. Give it twenty thousand words and it may not retain much; give it two hundred thousand and it might only skim the headings. A vivid analogy: lock someone in a small room for a day with a two-hundred-thousand-word book and ask them to recite it. Roughly what they could memorize is how much information the AI can "remember."

Therefore, there is a counterintuitive yet crucial experience:

1. Compression Is Much Easier Than Expansion

Compressing 1 million words into 10,000 is often more reliable than expanding 10,000 words into 1 million.

This directly changes how you make demands on AI:

· Don't expect to get an entire paper with a 100-word prompt

· Instead, feed in as much material as possible (in batches, via retrieval, or RAG) and let it compress structure, viewpoints, and body text from sufficient material

Your own process for writing articles and papers was always "read massive material → extract → organize → write" (mine was, at least). With AI, don't suddenly apply a double standard and expect magical expansion.
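Feeding material "in batches" presupposes a chunker that respects the context window. A minimal sketch; the chunk size and overlap are arbitrary placeholders, and real limits would be counted in tokens rather than characters:

```python
def chunk_text(text, max_chars=8000, overlap=200):
    """Split long material into overlapping chunks that fit a context
    window, preferring to cut at paragraph breaks when one is in range."""
    chunks, start = [], 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        # Prefer the last paragraph break inside the window.
        cut = text.rfind("\n\n", start, end)
        if cut <= start or end == len(text):
            cut = end
        chunks.append(text[start:cut])
        if cut == len(text):
            break
        # Overlap so no sentence is orphaned at a boundary.
        start = max(cut - overlap, start + 1)
    return chunks
```

Each chunk then goes through its own "extract facts and structure" pass, and only the compressed extracts are carried into the writing step.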

Principle 8: Restrain the Urge of "I'll Just Tweak It a Bit"—Change the Production Line, Not the Output

Many adept writers fail in front of AI in exactly this way:
AI produces a 59-point draft; you think a few tweaks will make it an 80, so you start changing it; the changes turn into a rewrite; after the rewrite you say "I'd better do it myself," and you never use AI again.

The solution is not to work harder on "revisions" but to shift the focus upstream:

· Don't aim for AI to directly produce a 100-point piece

· Your goal is to have the production line consistently output 75–80 points

· What you need to do is iterate the process to raise the "average score," rather than striving for "individual pieces" to be perfect

Principle 9: Treat the Production Line as Product Iteration—Reliability Itself is Value

When you have a system that consistently gives you a baseline of 70 points, its value is not in "how much it resembles you," but in:

· Getting a usable draft at close to zero cost

· You can focus your energy on higher-order judgment: topic selection, structure, evidence, taste, and trade-offs

What you want is not an almighty god who can replace you, but a reliable factory: it's not perfect, but it's stable.

Principle 10: Quantity is the Top Priority — Aim for Quantity First, Then Filter

Only letting AI give you one version will usually get you the most mediocre, most conservative, most "average" one. You must use "quantity" to combat "mediocrity".

A more effective approach is:

· Summary: Ask for 5 versions at once

· Beginning: Ask for 5 beginnings at once, do an AB Test

· Topic Selection: Ask for 50 topics at once, then group them, then select

· Structure: Ask for 3 sets of structures at once, then combine

· Phrasing: Ask for 10 different phrases at once, then choose the best

When you raise the average score and increase output, 85- and 90-point "surprise samples" naturally appear in the distribution. Often it isn't "that flash of inspiration"; it's that you have finally started working statistically.
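The "quantity first, then filter" habit is essentially best-of-n selection. A minimal sketch; the candidate openings and the crude scoring rule are invented for illustration, and in practice `generate` would be an LLM call and `score` your own taste (or a reviewer model):

```python
def best_of_n(generate, score, n=5):
    """Generate n candidates, then keep the one your scoring likes best."""
    candidates = [generate(i) for i in range(n)]
    return max(candidates, key=score)

# Five stub "openings"; the score penalizes flat, throat-clearing openers.
openings = [
    "In 1979, the last furnace went cold.",
    "The Rust Belt is a region in the United States.",
    "Steel built these towns; silence unbuilt them.",
    "This article discusses deindustrialization.",
    "Her father's badge still hangs by the door.",
]

pick = best_of_n(generate=lambda i: openings[i],
                 score=lambda s: -s.lower().count("this article"),
                 n=5)
```

Asking for one version gets you the mode of the distribution; asking for five and filtering gets you its tail, which is where the surprises live.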

Principle 11: Don't Do What Can Be Done by Others — Command, Taste, and Have It Redone Like an Executive Chef

If you are the executive chef of a restaurant, you won't personally go and chop cucumbers. You will:

· Take a bite

· Judge if it's acceptable

· Give clear feedback (what's wrong, how to improve)

· Have the chef redo it

Collaborating with AI is the same. Respect its way of generating in its own style; your job is to teach it how to meet your standards, not to jump in and polish its results yourself every time.

Otherwise, you will be worn out by endless "touch-ups".

The final foundational principle: Return to the Real World — Material × Taste, Determine the Ceiling of the Work

In the AI era, the quality of a work is becoming more and more like: Material × Taste.

Models will change, methods will iterate, but these two things remain constant:

1. Materials come from the real world


If you were given two choices to write an article:

· Use the latest model, but only with online data

· Use an older model, but you have a full archive, oral history, on-site interviews


The one more likely to produce good work is often the latter.

2. Taste Comes from Long-Term Training


When "generation" becomes cheap, what is truly scarce is:

· You know what is worth writing

· You know which evidence is stronger

· You know which narrative is more powerful

· You are willing to put in physical work for the material: travel far and wide, get hands-on with the sources

What AI changes is the efficiency and manner in which you interact with the material; however, you are still the subject of the work, and the material remains the object. AI is just part of the "verb."

Conclusion: Turn Anxiety into Tactility

Many people never get real use out of AI, not because they aren't smart, but because they are stuck in a loop of "wish → disappointment → give up." What gets you over this barrier is treating AI as a workbench, engineering your tasks, turning processes into transparent systems, and developing a feel through constant friction.

When you can do this, you won't rush to conclude that "AI doesn't work"; you will look more like a new kind of worker who knows how to manage new tools: neither looking down on it nor up at it, but placing it inside the process, inside reality, inside works you are willing to sign your name to.
