crescine
Work Package 7 · Task 7.3

Understanding Audiences
Beyond Sentiment

A guide to CineFlux—a new instrument for mapping how audiences psychologically engage with cinema, built for reception researchers who want more than positive or negative.
February 2026
A note on sources

This report is a companion to the CineFlux technical deliverable (CRESCINE WP7, Task 7.3). The deliverable provides the full scholarly apparatus: literature review, formal definitions, mathematical formulations, and validation methodology. This guide covers the same ground in a storytelling, less technical, tutorial register. Where it references established psychological frameworks, it names the key scholars inline. For full citations, theoretical grounding, and technical detail, the reader is referred to the deliverable document.

Block 1

What audience data tell us, and what we've been missing

Reception research has powerful tools for understanding audiences. But when it comes to the millions of reviews people write about films every year, we've been reading them with blunt instruments.

Audience researchers know that people don't just like or dislike a film. A viewer who calls a war drama "devastating but necessary" is doing something far more complex than expressing a preference. They're making a moral judgment about the story. They're reporting an emotional experience. They're positioning themselves in relation to the film's worldview. All of that is packed into four words.

And yet, when we analyse audience reviews at scale—the hundreds or thousands of responses that accumulate on platforms like Letterboxd, IMDb, or Rotten Tomatoes—we mostly reduce them to a single axis: positive or negative. Sentiment analysis, in its many forms, has become the default lens for computational audience research. It tells us how much audiences liked something. But it tells us almost nothing about why, or about the psychological texture of their engagement.

The sentiment ceiling

Sentiment analysis was never designed for the complexity of film reception. It was built for product reviews ("the battery life is great"), financial signals ("the market outlook is bearish"), and customer feedback ("the service was slow"). In those contexts, the positive/negative axis captures most of what matters. A five-star battery review and a one-star battery review differ mainly in valence.

Film reviews are different. Two five-star reviews of the same film may describe entirely different psychological experiences. One reviewer loved a film because it was thrilling and funny. Another loved it because it made them rethink their assumptions about justice. Both are "positive". But they engaged with the film through fundamentally different psychological pathways—one hedonic (pleasure, excitement), one eudaimonic (meaning, reflection). Sentiment analysis sees them as the same. An audience researcher knows they are not.

The same problem appears with negative reviews. A viewer who found a film "boring" and a viewer who found it "morally repugnant" are both negative—but the first is reporting a failure of engagement, while the second is reporting a deep engagement that led to a moral objection. These are opposite ends of a dimension that sentiment analysis cannot see.

The problem isn't that sentiment analysis is wrong. It's that it answers the least interesting question about audience reception: the direction of the thumb. Everything that makes reception research intellectually rich—the what, the why, the psychological specificity—gets lost in the reduction.

What qualitative methods see (but can't scale)

Reception researchers have always known this. Qualitative methods—close reading, focus groups, interviews, ethnographic observation—are built precisely to capture the richness that sentiment analysis discards. A skilled researcher reading a hundred reviews can identify the moral arguments, the emotional registers, the cultural tensions that run through audience response. They can distinguish the viewer who was moved by a film's depiction of injustice from the viewer who was moved by its visual beauty.

But qualitative methods face their own ceiling: scale. A close reading of a hundred reviews is illuminating. A close reading of ten thousand is a career. In an era where a single film generates thousands of written responses across multiple languages and platforms within weeks of release, the richest method of analysis is also the one that cannot keep up with the volume of data.

This creates a gap. On one side: computational methods that can handle scale but sacrifice psychological depth. On the other: qualitative methods that preserve depth but cannot handle scale. Most audience research lives in this gap, using surveys to capture structured psychological data (at the cost of ecological validity) or using sentiment analysis to process organic data (at the cost of psychological resolution).

What's actually in a review

CineFlux starts from a different premise. It asks: what if the psychological depth that qualitative researchers see in reviews could be extracted computationally—not by reducing reviews to a sentiment score, but by identifying the specific psychological signals that are already there?

Consider what a reviewer actually does when they write about a film. They describe events in the story world—characters who suffered, betrayals that shocked them, authorities that were challenged. They report their own experience—what thrilled them, what made them laugh, what kept them thinking afterwards. And they take positions—endorsing or condemning the moral choices characters made, valuing or dismissing the experience the film provided.

These are not random observations. They map onto well-established psychological constructs. When a reviewer says "the film explores linguistic oppression", they are engaging a moral framework about equality and justice. When they say "it was electrifying", they are reporting a hedonic response—specifically, arousal and excitement. When they say "I couldn't stop thinking about it", they are reporting a eudaimonic response—reflection, continued processing.

The signals are already in the text. The question is whether we can read them at scale, with precision, and with transparency about how each extraction decision was made. That is what CineFlux was designed to do.

Block 2

Two dimensions of audience response

When audiences respond to a film, several things happen at once. CineFlux focuses on two psychological dimensions: how they moralise the story, and how they express their subjective experience.

The insight at the heart of CineFlux comes from combining two well-established traditions in psychology that have rarely been brought together in audience research.

The first is moral foundations theory, developed by Jonathan Haidt and colleagues. It proposes that human moral reasoning is not a single capacity but a set of distinct foundations—different "taste receptors" for morality, each sensitive to a different kind of ethical concern. Some people respond strongly to questions of care and harm. Others are more attuned to loyalty, authority, or fairness. These foundations are not conscious ideologies—they are intuitive responses that shape how people evaluate what they see.

The second is the hedonic/eudaimonic framework of motivational orientations, drawing on the work of Mary Beth Oliver, Arthur Raney, and others. It distinguishes two broad pathways through which audiences find value in media. Hedonic motivation is about pleasure, fun, excitement, comfort—the film as enjoyable experience. Eudaimonic motivation is about meaning, insight, reflection—the film as something that deepens understanding or provokes thought. These are not opposites. A single viewer can experience both simultaneously. But they are distinct psychological processes, and they leave different traces in how people write about what they watched.

CineFlux's core move: map moral foundations onto what the reviewer says about the story world, and map motivational orientation onto what the reviewer says about their subjective experience. Same review, two different lenses, two different scopes.

How the research got here

It is worth pausing to say: these two dimensions did not arrive pre-packaged. CineFlux did not begin by selecting moral foundations theory and the hedonic/eudaimonic framework from a textbook and building a tool around them. It began with a wider question: of all the psychological dimensions that reception research has identified, which ones leave readable traces in the text of audience reviews?

The initial landscape was broad. Through a series of technical workshops within the CRESCINE consortium, the research team surveyed established constructs from media psychology and audience studies: narrative engagement and transportation (the sense of being "lost" in a story), character identification (feeling with or as a character), parasocial relationships (the bond viewers form with recurring figures), aesthetic appreciation (responding to form, craft, and beauty), and several others. Each of these is a legitimate, well-documented dimension of audience response. Each has its own literature and measurement instruments.

The question was not which of these are real—they all are—but which of them can be reliably detected in the organic, unstructured text of a film review. A survey can ask a viewer to rate their sense of transportation on a seven-point scale. A review does not come with scales attached. CineFlux needed dimensions that leave linguistic evidence—specific words, phrases, and evaluative patterns that a computational system can identify and assess.

Exploratory analysis on real review corpora—initially for the films Girl with the Needle and Kneecap—helped narrow the field. Using topic modelling to identify recurring thematic clusters in reviews, combined with qualitative scoring of those clusters against candidate psychological dimensions, two families of signals emerged as particularly strong: morality and values, and what the research initially called biases and preconceptions—the cultural, geographical, and identity-based frames through which audiences interpret what they see.

This second family—biases and preconceptions—deserves a moment of attention, because it did not survive into the final model as a separate dimension. In early exploratory work, it scored high: audiences reviewing Kneecap, for instance, brought strong cultural and political frames to their viewing (Irish identity, anti-colonial sentiment, linguistic politics). These frames shaped not just what they noticed in the film but how they evaluated it. The research team initially treated this as a peer to morality—a distinct dimension of response.

As the model matured, this dimension was not so much discarded as redistributed. Cultural and identity-based frames turned out to express themselves through moral foundations: a reviewer's anti-colonial frame surfaces as a loyalty–betrayal or authority–subversion response; their sensitivity to linguistic marginalisation surfaces as an equality response. The biases don't disappear—they shape which moral foundations get activated, and in which direction. Rather than tracking them as a separate axis, CineFlux captures their effects as patterns within the moral foundation activations. This is a design decision with trade-offs, and the research team is aware that future iterations may revisit it.

Similarly, dimensions like narrative transportation and character identification were deferred. The reasons were partly linguistic—these dimensions leave less distinctive traces in review text (a reviewer who experienced deep transportation may simply write "I was completely absorbed", which is difficult to distinguish computationally from generic engagement)—and partly pragmatic. CineFlux was conceived as a minimum viable product: a first working version of an instrument that could be tested, validated, and iteratively extended. For some candidate dimensions, the existing scholarly literature did not provide validation data against which the system's automated extractions could be benchmarked. Rather than ship dimensions that could not be verified, the team chose to start with the dimensions that were both linguistically detectable and empirically validatable, and to design the architecture so that additional dimensions can be incorporated as validation resources become available.

This is a point worth being explicit about: CineFlux does not claim to capture the full psychology of audience reception. It captures what can currently be extracted with confidence from organic review text. That is a deliberately scoped claim, and the modular architecture of the system means the scope can grow.

The fourteen variables in CineFlux are not the universe of audience psychology. They are the dimensions that the research found to be theoretically grounded, linguistically detectable in organic review text, and amenable to empirical validation against existing data. The model is designed to be extended—and the dimensions that were explored but deferred remain candidates for future versions.

The moral dimension: what happens in the story

When a reviewer engages with the moral dimension of a film, they are responding to events, characters, and norms within the narrative. They might praise a character's loyalty, condemn an act of cruelty, or question whether justice was served. These responses are not about the reviewer's own life—they are about the moral fabric of the story world as the reviewer perceives it.

CineFlux tracks six moral foundations, adapted from Haidt's framework for the specific context of film reception:

Moral Foundations
Care–Harm
Sensitivity to suffering, compassion, and the motivation to protect or help the vulnerable. In reviews: references to characters' well-being, empathy toward those in distress, or concern about cruelty depicted in the story.
Scope: Story World
Equality
The belief that all groups should be treated with equal rights and opportunities. In reviews: references to social justice, discrimination, oppression, inclusivity, or the levelling of hierarchies within the narrative.
Scope: Story World
Proportionality
Fairness based on merit and deservingness—rewards and punishments should match effort and responsibility. In reviews: judgments about whether characters got what they deserved, whether justice was served, or whether outcomes were earned.
Scope: Story World
Loyalty–Betrayal
Allegiance to one's group, community, or nation. In reviews: references to friendship, family bonds, patriotism, group solidarity—or their violation through betrayal, abandonment, or disloyalty.
Scope: Story World
Authority–Subversion
Respect for tradition, leadership, and hierarchy—or the defiance of established power. In reviews: references to obedience, social order, strong leaders, or rebellion, rule-breaking, and the consequences of challenging authority.
Scope: Story World
Sanctity–Purity
Concerns about sacredness, cleanliness, and protection from degradation—often tied to the body, sexuality, or the sacred. In reviews: references to disgust, taboo, moral contamination, innocence, or the boundaries of decency.
Scope: Story World

The experiential dimension: what happens in the viewer

The second dimension captures something different: not what the reviewer thinks about the story, but what the story did to them. This is the domain of motivational orientation—the psychological needs that media consumption can fulfil.

The hedonic pathway is about immediate gratification: pleasure, fun, excitement, emotional relief. The eudaimonic pathway is about deeper processing: meaning-making, reflection, the sense that something has been understood or felt at a level beyond entertainment. CineFlux tracks eight variables across these two pathways:

Motivational Orientation—Hedonic
Humour–Amusement
Laughter, comedic enjoyment, wit, absurdity. The reviewer found the film funny, appreciated its comic timing, or enjoyed its irreverence.
Scope: Reviewer Experience
Thrill–Arousal
Excitement, suspense, tension, adrenaline. The reviewer experienced physiological or emotional activation—the film got their pulse up.
Scope: Reviewer Experience
Comfort–Mood Regulation
Relaxation, escapism, emotional soothing, nostalgic warmth. The reviewer experienced the film as a source of comfort or mood repair.
Scope: Reviewer Experience
Hedonic (other)
A genuine hedonic signal — entertainment, pleasure, diversion, escapism, or enjoyment — is clearly present in the review, but is not specific enough to be confidently assigned to humour–amusement, thrill–arousal, or comfort–mood regulation. The catch-all variable ensures no hedonic signal is lost in classification.
Scope: Reviewer Experience
Motivational Orientation—Eudaimonic
Reflection–Rumination
Continued thinking, re-evaluation, dwelling on themes after viewing. The film stayed with the reviewer—they kept processing it.
Scope: Reviewer Experience
Insight–Illumination
New understanding, a perspective shift, a revelation. The reviewer saw something differently because of the film—an "aha" moment.
Scope: Reviewer Experience
Meaning–Appreciation
A sense of significance, depth, or value. The reviewer appreciated the film for touching something important—artistic truth, human experience, or existential weight.
Scope: Reviewer Experience
Eudaimonic (other)
A genuine eudaimonic signal — the reviewer finds the film meaningful, worthwhile, important, or moving in a deeper sense — is clearly present, but is not specific enough to be confidently assigned to reflection–rumination, insight–illumination, or meaning–appreciation. The catch-all variable ensures no eudaimonic signal falls through the classification net.
Scope: Reviewer Experience

Why the scope distinction matters

This is perhaps the most important structural decision in CineFlux, and it is easy to overlook. When a reviewer writes "the protagonist's betrayal was devastating", the loyalty–betrayal dimension is activated—and it is about the story world. The betrayal is a narrative event. The reviewer is moralising something that happened in the film.

When the same reviewer writes "I couldn't stop thinking about it afterwards", the reflection–rumination dimension is activated—and it is about the reviewer's experience. The continued thinking is something that happened in the viewer's mind, not in the plot.

These two statements might appear in the same sentence. They might even feel like the same response. But they operate at different levels: one is a moral evaluation of fictional content, the other is a report on psychological processing. Confusing them—treating "the betrayal was devastating" as a viewer emotion rather than a story-world judgment, or treating "I kept thinking about it" as a comment about the plot rather than about the viewer's own mind—collapses a distinction that reception research needs to preserve.

Moral foundations tell you what the audience sees in the story. Motivational orientation tells you what the story does to the audience. A film can activate the same moral foundation in two viewers and produce entirely different experiential responses—one finds it thrilling, the other finds it deeply unsettling. CineFlux is designed to capture both sides of that equation.

Fourteen variables, not one

Together, this gives CineFlux a vocabulary of fourteen psychological variables—six moral foundations operating over the story world, and eight motivational orientations operating over the viewer's experience. Each variable is independently assessed. Each can be activated or not. Each can carry a directional stance.

This is not a taxonomy for the sake of taxonomizing. It is a response to a specific analytical need: when an audience researcher asks "how did audiences respond to this film?", the answer should not be a single number. It should be a pattern—a profile of which psychological dimensions were engaged, how strongly, and in which direction. That pattern is what CineFlux calls the reception landscape, and it is what the remaining blocks of this guide will show you how to read.
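As a concrete sketch of what "fourteen variables, each independently assessed" implies for data, a single review's reception profile can be modelled as a mapping from variable to stance. All names below are illustrative, not CineFlux's actual schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class Stance(Enum):
    POSITIVE = "positive"
    NEGATIVE = "negative"
    NEUTRAL = "neutral"
    INACTIVE = "inactive"  # variable not activated by the review

# Six moral foundations (scope: story world) and eight
# motivational orientations (scope: reviewer experience).
MORAL_FOUNDATIONS = [
    "care_harm", "equality", "proportionality",
    "loyalty_betrayal", "authority_subversion", "sanctity_purity",
]
MOTIVATIONAL_ORIENTATIONS = [
    "humour_amusement", "thrill_arousal", "comfort_mood_regulation",
    "hedonic_other",
    "reflection_rumination", "insight_illumination",
    "meaning_appreciation", "eudaimonic_other",
]

@dataclass
class ReceptionProfile:
    """One review's pattern across all fourteen variables."""
    stances: dict = field(default_factory=lambda: {
        v: Stance.INACTIVE
        for v in MORAL_FOUNDATIONS + MOTIVATIONAL_ORIENTATIONS
    })

    def active(self):
        """Variables for which all extraction gates passed."""
        return [v for v, s in self.stances.items() if s is not Stance.INACTIVE]

profile = ReceptionProfile()
profile.stances["loyalty_betrayal"] = Stance.POSITIVE
profile.stances["thrill_arousal"] = Stance.POSITIVE
assert len(profile.stances) == 14
assert profile.active() == ["loyalty_betrayal", "thrill_arousal"]
```

The point of the structure is the analytical claim above: the answer to "how did audiences respond?" is the whole mapping, not a single aggregate number.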

Block 3

What a review looks like through CineFlux's eyes

Words on a screen are not signals. They become signals only once a system knows what to look for—and has principled rules for finding it. This block follows a single Letterboxd review of Kneecap through every step of CineFlux's analysis, from plain text to a structured profile of eleven psychological dimensions.

The review, as written

The following short review was posted on Letterboxd shortly after Kneecap's release. It is 88 words long. It is worth reading it once, as a reader, before we look at what CineFlux finds in it.

Tiocfaidh ár lá, GET THE BRITS OUT!! FUN and ELECTRIFYING, energetic, and passionate film about a band that uses music as a political weapon. explores linguistic oppression in Ireland – Every word of Irish spoken is a bullet fired for Irish freedom. The power of art and music (and specifically this band) to help preserve and bolster a language on the brink of cultural extinction and the power art holds in instigating political and cultural change is brilliantly showcased here.

A competent sentiment classifier would call this very positive. A star rating would probably be five stars. Neither reading is wrong—but neither is very informative to a reception researcher who wants to understand how this audience member engaged with the film.

The same review, as CineFlux reads it

Below is the same review, with the passages that activated each of CineFlux's psychological dimensions highlighted by dimension family. Passages that carry multiple overlapping signals are shown with the dominant family—the per-variable cards that follow identify every active dimension individually.

Tiocfaidh ár lá, GET THE BRITS OUT!! FUN and ELECTRIFYING, energetic, and passionate film about a band that uses music as a political weapon. explores linguistic oppression in Ireland – Every word of Irish spoken is a bullet fired for Irish freedom. The power of art and music (and specifically this band) to help preserve and bolster a language on the brink of cultural extinction and the power art holds in instigating political and cultural change is brilliantly showcased here.
Moral Foundations—story world
Motivational Orientation—hedonic
Motivational Orientation—eudaimonic

Eleven signals, from fourteen possible

Of CineFlux's fourteen variables, eleven were activated by this review—across all three dimension families. Three variables found no evidence in the text. The cards below walk through each active variable in turn, showing the triggering passage and the reasoning at each gate of the extraction process. (The extraction gates are explained in detail in Block 4; for now, treat Trigger, Appraisal, and Stance as the three questions the system answers for each variable.)
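Before walking through the cards, it may help to see the shape of one such record. A hypothetical serialisation of a single card (field names are illustrative, not the deliverable's actual schema) could look like:

```python
# One extraction "card" for the Kneecap review, as a plain record.
# Field names are illustrative, not CineFlux's actual schema.
card = {
    "variable": "authority_subversion",
    "family": "moral_foundations",
    "scope": "story_world",  # vs. "reviewer_experience"
    "trigger": "Every word of Irish spoken is a bullet fired for Irish freedom",
    "appraisal": "The subversion is celebrated as the moral core of the film",
    "stance": "positive",  # variable-relative: the subversion is endorsed
}

# A minimal sanity check: every active card answers the three TAS questions.
assert {"trigger", "appraisal", "stance"} <= set(card)
```

Each card below carries exactly these three answers, plus the variable and scope it belongs to.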

Moral Foundations — Story World
Loyalty – Betrayal
Tiocfaidh ár lá, GET THE BRITS OUT!!
Trigger
Opens with an Irish republican rallying cry; the film is immediately framed as being about in-group solidarity against an external power.
Appraisal
The reviewer reads the story as one in which group loyalty is the central moral axis—loyalty to language, community, and cause.
Stance
positive—solidarity is celebrated; the in-group and its loyalties are affirmed.
Care – Harm
help preserve and bolster a language on the brink of cultural extinction
Trigger
The language is described as endangered—something vulnerable that requires care to survive. This invokes the care-harm axis.
Appraisal
The reviewer frames the film as enacting protective care—the band, and the film itself, are agents of preservation against threat.
Stance
positive—the care and protection are approved; the reviewer endorses the film's protective stance.
Equality – Oppression
explores linguistic oppression in Ireland
Trigger
The word "oppression" names an inequality directly. The film is read as engaging with a fairness violation rather than a personal grievance.
Appraisal
The reviewer frames this as structural injustice—a group denied the right to its language—rather than individual bad behaviour.
Stance
negative—oppression is the dimension's negative pole; the film shows inequality, and the reviewer acknowledges it as such. Note: this is a variable-relative stance, not a value judgment on the reviewer's approval of the film.
Authority – Subversion
Every word of Irish spoken is a bullet fired for Irish freedom
Trigger
The metaphor of language as a weapon against authority is explicit. Speaking Irish is coded as an act of resistance against an occupying power.
Appraisal
The reviewer endorses this framing—the subversion is not merely noted but celebrated as the moral core of the film's project.
Stance
positive—subversion is endorsed. In this variable, "positive" means supporting the challenge to authority, not supporting the authority itself.
Motivational Orientation — Hedonic
Humour – Amusement
FUN
Trigger
A single emphatic word in uppercase—this is as direct a trigger signal as exists in the corpus. The capitalisation itself carries evaluative weight.
Appraisal
The reviewer is reporting a direct first-person experience of comedic pleasure, not describing the film's genre. The experience is in the reviewer, not the story.
Stance
positive—amusement is present and enjoyed; the reviewer's experience of humour is reported affirmatively.
Thrill – Arousal
ELECTRIFYING, energetic, and passionate
Trigger
"Electrifying" and "energetic" are physiological-arousal terms—they describe a bodily or emotional activation state, not narrative content.
Appraisal
Three consecutive arousal terms signal that the reviewer is reporting on their own activated state—the film got their pulse up. This belongs to reviewer experience, not story evaluation.
Stance
positive—the arousal is experienced as pleasurable and energising.
Comfort – Mood Regulation
Tiocfaidh ár lá, GET THE BRITS OUT!! FUN and ELECTRIFYING
Trigger
The review opens with an affectively charged communal exclamation followed by immediate euphoria. The tone is celebratory and uplifting.
Appraisal
The overall emotional register suggests the film served a mood-elevating function for this reviewer—the experience was invigorating rather than demanding or challenging.
Stance
positive—the reviewer experienced mood uplift. This is among the weaker activations in this review; the signal is present but diffuse.
Hedonic (other)
FUN and ELECTRIFYING, energetic, and passionate film
Trigger
Multiple enthusiasm markers in rapid succession signal a genuine hedonic engagement — entertainment and pleasure are clearly present in the reviewer's experience.
Appraisal
The signal is real, but not specific enough to confidently assign to humour–amusement, thrill–arousal, or comfort–mood regulation in isolation. Hedonic (other) is the classifier's catch-all for exactly this case — no hedonic signal should be lost.
Stance
positive — the hedonic engagement is welcomed and affirmed.
Motivational Orientation — Eudaimonic
Meaning – Appreciation
the power art holds in instigating political and cultural change is brilliantly showcased here
Trigger
An explicit statement about art's capacity for social transformation. The reviewer is articulating why the film matters—not just what it does.
Appraisal
This is a meaning-making statement: the reviewer found the film significant beyond its narrative or entertainment value. The word "brilliantly" signals high appreciation.
Stance
positive—the film's meaning is received as genuinely illuminating and valuable.
Insight – Illumination
Every word of Irish spoken is a bullet fired for Irish freedom
Trigger
The bullet metaphor is not just a political statement—it reframes something familiar (speaking a language) as something new (an act of resistance). This is the hallmark of illumination.
Appraisal
The reviewer has been given or has arrived at a new way of seeing. The film produced a conceptual shift: language-as-resistance is now legible where it may not have been before.
Stance
positive—the insight is received and endorsed; the reviewer is not disturbed or destabilised by the new framing.
Eudaimonic (other)
the power art holds in instigating political and cultural change is brilliantly showcased here
Trigger
The entire second half of the review — the sustained articulation of art's political and cultural significance — signals a genuine eudaimonic engagement: meaningful, important, moving in a deeper sense.
Appraisal
The eudaimonic signal is real and strong, but spans multiple themes — preservation, political change, cultural survival — making confident assignment to a single sub-dimension (reflection, insight, or meaning) unreliable. Eudaimonic (other) captures it as the classifier's catch-all.
Stance
positive — the reviewer's eudaimonic engagement is affirmed throughout the review.

What CineFlux did not find

Three variables found no activating text in this review. Their absence is as informative as any activation.

Sanctity – Purity
Proportionality
Reflection – Rumination

There is no language invoking sacredness, defilement, or moral contamination—the register is political and communal, not sacred. There are no judgments about whether characters got what they deserved, whether outcomes were earned, or whether justice was served. And although the review articulates meaning clearly, it does not report that the film stayed with the reviewer, prompted continued thinking, or left unresolved feelings after viewing—so reflection–rumination remains inactive.

A profile that is strong on moral foundations (four active moral foundation variables) and eudaimonic meaning, with hedonic signals present but secondary, and no sacred-register activation—this is not a "positive review". It is a recognisable type of engagement. CineFlux names it as such.

This review scores in the highest tier on moral foundations—the reviewer is doing moral evaluation of a political story. But notice that the eudaimonic signals are also strong, and they are different in kind: the meaning-appreciation and insight-illumination activations are not about the story's morality but about what the film did to the reviewer's understanding. CineFlux preserves that distinction. Sentiment analysis does not.
Block 4

How CineFlux extracts these signals—the TAS protocol

Finding a psychological signal in a text is not the same as asserting that the signal is present. CineFlux uses a three-gate architecture—Trigger, Appraisal, Stance—designed to minimise false positives and produce externally verifiable extractions. This block explains how that architecture works, and why each gate is necessary.

The challenge facing any system that reads psychological content from text is the same challenge facing a human analyst: words are ambiguous. The word "betrayal" might appear in a review that is doing something entirely different from moral evaluation—a factual plot summary, a metaphorical complaint about a sequel's quality, a passing allusion. Treating every occurrence of a morally loaded word as an activation of the corresponding variable would produce a useless instrument, flooded with false positives.

CineFlux's answer is a three-gate extraction protocol. Each gate is a necessary condition. A variable is activated only if all three gates pass. If any gate fails, the variable receives a zero score—even if the triggering language is present.

The three gates

Gate 1
Trigger
Is there language in the review that is
consistent with this variable?
Gate 2
Appraisal
Does the reviewer evaluate or interpret
that content—not merely describe it?
Gate 3
Stance
In which direction is the variable
activated—positive, negative, or neutral?

Gate 1—Trigger asks whether there is any textual evidence at all. A review that contains no language touching on care, harm, fairness, or authority cannot activate the corresponding variables, regardless of how much the reviewer might have thought about those things. CineFlux works only from text.

Gate 2—Appraisal is the gate that separates evaluation from description. A reviewer who writes "the film depicts corruption in a government ministry" is summarising a plot point, not activating the proportionality–cheating variable. Appraisal requires that the reviewer not merely report what happens but take a stance—implicitly or explicitly—on its moral or emotional significance. "The film depicts corruption, and it is damning" passes. "The film depicts corruption" does not.

Gate 3—Stance determines the direction. For moral foundation variables, stance is variable-relative: "positive" for authority–subversion means the subversion is endorsed, not that the reviewer felt positively in a generic sense. For motivational orientation variables, stance is closer to the reviewer's expressed affect: positive means the experience was welcome; negative means it was aversive or distressing. Neutral is reserved for cases where a dimension is activated but no directional signal can be reliably extracted from the text.

The stance gate matters most for moral foundations. A reviewer who finds a film's depiction of racial inequality "uncomfortable and important" is activating the equality variable—but the stance is not simply negative (inequality shown) or positive (justice affirmed). The discomfort and the importance pull in different directions. CineFlux records the dominant textual signal; where genuine ambivalence is present, the annotation captures it.

A worked example: reading between the emotional and the moral

One of the most instructive tests for the TAS protocol is a sentence that appears morally clear on first reading but rewards closer analysis. Consider this line from a review of Kneecap:

I was so happy to see that every word of Irish was a bullet fired for Irish freedom.
TAS walkthrough—authority–subversion
Trigger
"A bullet fired for Irish freedom"—language-as-weapon against an occupying authority. The metaphor explicitly frames speech acts as resistance against power. The trigger is unambiguous.
Gate 1: pass
Appraisal
"I was so happy to see" is an evaluative frame—the reviewer is not reporting a plot event but expressing an emotional response to a moral stance they observed in the film. The word "so" amplifies the appraisal. This is not a description; it is an endorsement.
Gate 2: pass
Stance
The happiness is directed at the subversive act—at the act of resistance. The stance is positive for subversion: the reviewer endorses the challenge to authority. Note that "happy" here is not evidence for the humour–amusement variable. The emotion is about the political act, not a report on comedic enjoyment. CineFlux tracks which variable the emotion is modifying.
Gate 3: pass—stance: positive (subversion endorsed)

The critical distinction here is between emotion as content and emotion as appraisal. "Happy" does not automatically activate humour–amusement. It activates whatever variable it is evaluating. In this sentence, the happiness is a value judgment about a moral stance—it belongs to authority–subversion, not to hedonic motivation.

When a gate fails: partial activation

Not all three gates need to pass for the system to record something. CineFlux uses a weighted scoring formula that reflects how far through the three-gate chain an annotation reaches. A trigger alone contributes a small activation weight. A trigger plus appraisal contributes more. Full passage through all three gates—trigger, appraisal, and directional stance—contributes the maximum.

Consider a reviewer who writes: "There's a lot of stuff about Irish and British politics, I guess". The trigger gate might pass for authority–subversion (politics, power, national identities are present), but the appraisal gate would fail—the reviewer is not evaluating the moral content, merely acknowledging it exists. The variable receives a partial activation weight reflecting the trigger alone, and no stance is recorded. The output is not zero, but it is substantially lower than a full three-gate activation—and it is clearly distinguished from a case where the reviewer engaged with the material deeply.

The activation formula

Each variable's activation score is calculated as a weighted sum of the three gate outcomes:

activation = 0.2 × I(trigger) + 0.4 × I(appraisal) + 0.4 × I(stance ≠ neutral)

Where I(condition) equals 1 if the condition is met and 0 if not. A trigger alone scores 0.2. Trigger plus appraisal scores 0.6. Full passage through all three gates scores 1.0. Multiplying by the stance sign (±1) gives a signed activation score that carries directional information.

The weights were calibrated deliberately: the appraisal and stance gates together account for 80% of the score, reflecting the model's commitment to rewarding genuine psychological engagement over mere textual proximity to relevant vocabulary.
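As a minimal sketch, the scoring rule can be written directly from the formula above. The function names and stance labels are illustrative assumptions, not CineFlux's actual API:

```python
def activation(trigger: bool, appraisal: bool, stance: str) -> float:
    """Weighted sum of the three gate outcomes, in [0, 1]."""
    score = 0.0
    if trigger:
        score += 0.2  # Gate 1: triggering language present
    if appraisal:
        score += 0.4  # Gate 2: evaluation, not mere description
    if stance in ("positive", "negative"):
        score += 0.4  # Gate 3: a directional stance was extracted
    return score


def signed_activation(trigger: bool, appraisal: bool, stance: str) -> float:
    """Activation multiplied by the stance sign; zero if neutral or absent."""
    sign = {"positive": +1, "negative": -1}.get(stance, 0)
    return activation(trigger, appraisal, stance) * sign
```

A trigger-only annotation scores 0.2, trigger plus appraisal 0.6, and full passage through all three gates 1.0, matching the weights stated above.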

Leaving a paper trail: how CineFlux documents its reasoning

One of the most consequential design decisions in CineFlux is that every annotation is externally verifiable. The output is not simply a set of activation scores. For every activated variable in every review, the system records three things alongside the score:

First, the verbatim text that triggered the annotation, identified by its exact character position in the source review (a start index and an end index). Any researcher can pull up the original review and verify that the annotated passage says precisely what the annotation claims it says. There is no black box.

Second, a justification—a short natural-language statement of why the passage was assigned to this variable at this gate. The justification is not a summary of the text; it is a piece of reasoning that connects the text to the variable definition. For the authority–subversion example above, the justification might read: "language-as-weapon metaphor frames speech as resistance against occupying power—explicit subversion of political authority". That reasoning can be evaluated, challenged, or compared across annotations.

Third, a confidence indicator: a signal of whether the annotation is classified as defensible (the evidence is strong and the reasoning is clear) or flagged for review. Annotations in the flagged category are not discarded, but they are tracked separately and their contribution to aggregate scores is visible.

This architecture is designed for researchers, not just for pipelines. The character indices, justifications, and confidence levels mean that a reception scholar can open the output file alongside the original review and follow every single annotation back to its source. When CineFlux says a review activated authority–subversion with a positive stance, it shows exactly which words triggered that conclusion and explains why. That transparency is not optional—it is the condition under which the system's outputs can be trusted and responsibly used.
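As a concrete illustration, an annotation record for the Kneecap sentence analysed earlier in this block might look like the following. The field names and schema are assumptions for illustration; only the three recorded elements (verbatim span with character indices, justification, confidence) come from the text above:

```python
review = ("I was so happy to see that every word of Irish "
          "was a bullet fired for Irish freedom.")

# Character indices are derived from the review itself, so the evidence
# span can always be re-read in its original context.
trigger_text = "a bullet fired for Irish freedom"
start = review.find(trigger_text)
end = start + len(trigger_text)

annotation = {
    "variable": "authority_subversion",
    "stance": "positive",          # subversion endorsed
    "activation": 1.0,             # all three gates passed
    "span": {"start": start, "end": end},
    "justification": ("language-as-weapon metaphor frames speech as "
                      "resistance against occupying power"),
    "confidence": "defensible",    # vs. "flagged" for review
}

# External verifiability: the span resolves to exactly the quoted evidence.
assert review[annotation["span"]["start"]:annotation["span"]["end"]] == trigger_text
```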

Scope as a decision rule

Block 2 introduced the distinction between story-world scope (moral foundations) and reviewer-experience scope (motivational orientation). In the TAS protocol, scope is not an interpretive afterthought—it is a decision rule that applies at the appraisal gate.

When a passage activates a potential moral foundation signal, the appraisal check asks: is the reviewer moralising about the story's world? If the reviewer writes "the protagonist's choice to betray his comrades was cowardly", they are passing judgment on a fictional character in a fictional situation. That is story-world scope. It activates loyalty–betrayal.

If instead the reviewer writes "this film betrayed my trust as a viewer by abandoning its own premises", they are moralising about their own experience of watching. That is reviewer-experience scope. The word "betrayal" is present, but loyalty–betrayal is not activated—because the reviewer is not evaluating the story's moral content. The appropriate dimension to check would be eudaimonic (other): a disrupted or frustrated viewing experience.

This distinction matters because conflating the two produces uninterpretable aggregate data. If the loyalty–betrayal scores for a film pool together reviewers who found narrative betrayals meaningful and reviewers who felt personally let down by the filmmaking, the resulting number does not measure anything coherent. CineFlux's scope rule prevents that conflation from entering the data.

The scope rule is one of the things that makes CineFlux genuinely different from a topic model or a keyword classifier. A topic model sees the word "betrayal" and assigns it to the betrayal topic—it does not ask whose experience the sentence is about. CineFlux asks precisely that question at every appraisal gate, and gives different answers depending on whether the moralising points inward (at the reviewer) or outward (at the story).
Block 5

From one review to many: scoring and aggregation

The isa review gave us eleven signals from a single voice. But one review is not an audience. CineFlux's real analytical value emerges when a cohort of reviews is processed together and individual TAS annotations are compressed into a film-level profile. This block explains how that compression works—and what it preserves, and what it hides.

The cohort

CineFlux calls the set of reviews being analysed together a cohort, denoted C. In the simplest case, C is all reviews for a single film. In more targeted analyses, C might be reviews from a specific platform, country of origin, viewing context, or audience source—public organic reviews versus the structured responses of a recruited audience segment. CineFlux makes no assumption about how the cohort is constituted, as long as it is specified in advance. When two cohorts for the same film are available, their profiles can be compared directly: this comparison is the basis of CineFlux's divergence analysis.

The aggregation stage takes as its input the per-review activation scores αr,y and signed orientation scores σr,y computed by the TAS protocol for every review r and every variable y. It produces three statistics for each variable across the cohort. Together, those three statistics constitute a complete quantitative summary of how that variable was engaged in the cohort.

A worked example: five reviewers, one variable

The following table reproduces the illustrative worked example from the technical deliverable (§7.4.6), tracing the variable humour–amusement through a five-review cohort. The example was constructed to show the full range of TAS gate outcomes in a single cohort—from full activation to no activation, and from positive to negative stance.

Worked example—humour–amusement, cohort of five reviews
Review extract Trigger Appraisal Stance Activation α Signed σ
"A blast. So much fun". 1 1 pos 1.0 +1.0
"Fun in places, mostly tedious". 1 1 neg 1.0 −1.0
"Light entertainment". 1 0 neu 0.2 0
[Plot summary only—no experiential content] 0 0 — 0 0
"Entertaining enough". 1 1 pos 1.0 +1.0
Cohort totals (n = 5) Σ = 3.2 Σ = +1.0

Source: technical deliverable §7.4.6. Trigger, appraisal and stance values are binary/categorical outputs of the TAS protocol; activation and signed scores are derived according to §7.4.2.

The first question: how present is this variable?

The mean activation āC,y answers the most basic question a researcher can ask about a cohort: on average, how much did this variable appear across reviews? It is computed as the arithmetic mean of all activation scores in the cohort:

āC,y = (1 / nC) × ∑ αr,y ∈ [0, 1]

In the worked example: ā = 3.2 / 5 = 0.64. This means humour–amusement was present to a moderate-to-high degree across this cohort. The variable was active—it contributed meaningfully to several reviews.

Notice that this figure already carries more information than a simple "present / absent" flag. A variable that fires at full strength in half the cohort and is absent in the other half yields ā = 0.50—the same mean activation as a variable that fires weakly in every review. These are different patterns, but ā summarises them at the same level. This is a deliberate simplification; it is addressed by reading ā alongside the other two statistics.

The second question: which way does the cohort lean?

The mean signed orientation σ̄C,y answers the directional question. For each review, the signed score σ is the activation score multiplied by the stance sign: +1 for a positive stance, −1 for a negative stance, 0 for a neutral or absent stance. The cohort mean of those signed scores gives the net lean:

σ̄C,y = (1 / nC) × ∑ σr,y ∈ [−1, +1]

In the worked example: σ̄ = (+1.0 − 1.0 + 0 + 0 + 1.0) / 5 = +0.20. The cohort leans mildly positive on humour–amusement: more reviewers experienced the comedic content as welcome than aversive. But notice the denominator is 5, not the 3 reviews that actually had a directional stance. Reviews with no trigger and reviews with a neutral stance both contribute 0 to the numerator—they dilute the orientation signal rather than being excluded from it. This is intentional: σ̄ is a cohort statistic, not a statistic of engaged reviewers only.

The polarisation trap—why ā and σ̄ must be read together

The most important interpretive principle in CineFlux's aggregation stage is one that is easy to miss: a mean signed orientation near zero does not imply that the variable is unimportant or unanimously neutral.

Consider two contrasting cohorts for the same variable. In Cohort A, no reviewer triggers the variable at all—ā = 0, σ̄ = 0. In Cohort B, half the reviewers activate the variable strongly with a positive stance and half with a negative stance—ā = 1.0, σ̄ = 0. Both cohorts produce σ̄ ≈ 0. But they describe opposite situations: silence versus polarisation. A researcher who reads σ̄ alone and concludes "no engagement" in both cases has drawn the wrong conclusion from Cohort B.

The deliverable formalises this with a third statistic: the proportion of directional stance MC,y, computed as the mean of the absolute signed scores:

MC,y = (1 / nC) × ∑ |σr,y| ∈ [0, 1]

M strips direction away and asks: regardless of which way reviewers leaned, how strongly did they lean at all? In the worked example: M = (1.0 + 1.0 + 0 + 0 + 1.0) / 5 = 0.60. Three of five reviews expressed a clear directional stance; two did not. The variable is genuinely contested in this cohort—not ignored.

The three statistics together produce a compact but complete description of the cohort's engagement with each variable:

Mean activation — ā
0.64
Variable broadly present across the cohort. More than half of reviews show at least partial engagement.
Mean signed orientation — σ̄
+0.20
Mild net-positive lean. The majority of directional stances favour the variable's positive pole—but the lean is not strong.
Directional stance proportion — M
0.60
Most reviewers who engaged took a clear directional stance. The near-zero σ̄ reflects a divided cohort, not a neutral one.
Reading the three statistics together: this is a variable that is broadly present (ā = 0.64), leans mildly positive on balance (σ̄ = +0.20), but contains genuine internal division (M = 0.60 with one strongly negative reviewer). The cohort did not unanimously enjoy the humour—it was a contested aspect of their experience. None of this is visible from ā or σ̄ alone.
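The three statistics can be reproduced directly from the worked example. A minimal sketch in Python, with the α and σ values read off the five-review table above:

```python
# Per-review scores from the worked humour–amusement example.
alpha = [1.0, 1.0, 0.2, 0.0, 1.0]     # activation α per review
sigma = [+1.0, -1.0, 0.0, 0.0, +1.0]  # signed orientation σ per review
n = len(alpha)

a_bar = sum(alpha) / n                  # mean activation ā ≈ 0.64
s_bar = sum(sigma) / n                  # mean signed orientation σ̄ = +0.20
m     = sum(abs(s) for s in sigma) / n  # directional stance proportion M = 0.60

# The polarisation trap: a silent cohort and a perfectly split cohort
# share σ̄ = 0; only M tells them apart.
silent = [0.0, 0.0, 0.0, 0.0]
split  = [+1.0, -1.0, +1.0, -1.0]
assert sum(silent) / 4 == sum(split) / 4 == 0.0  # identical σ̄
assert sum(map(abs, split)) / 4 == 1.0           # M = 1.0: polarisation
assert sum(map(abs, silent)) / 4 == 0.0          # M = 0.0: silence
```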

The active threshold

CineFlux treats a variable as meaningfully active in a cohort when ā ≥ 0.5. Below this threshold, whatever directional signal exists is considered too weak to anchor an interpretive claim. A variable with ā = 0.18 and σ̄ = −0.16 leans negative in nearly every review that engaged it—but so few reviewers engaged, and so weakly, that the orientation is not decision-relevant. The active threshold is a pipeline parameter: it is declared in the output metadata and can be adjusted by researchers with specific analytical needs.

The threshold reflects an important epistemic principle: in CineFlux, the existence of an engagement (ā) is prior to its direction (σ̄). You cannot reliably characterise a cohort's orientation towards a dimension the cohort barely noticed.

Dimension-level aggregation

The fourteen variable-level profiles roll up into three dimension-level summaries—Moral Foundations, Hedonic Motivational Orientation, and Eudaimonic Motivational Orientation—by taking the normalised mean across the variables in each dimension. This gives the compact three-number profile that stakeholders can read at a glance: which of the three families of psychological engagement did this film's audience mobilise most?

The variable-level scores remain available in the output alongside the dimension-level summaries. The dimension summary tells you where the audience's engagement was concentrated; the variable scores tell you how—which specific psychological content within each family was activated, and in which direction. Neither level replaces the other.
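A sketch of that roll-up, assuming the per-variable mean activations ā are already computed. The dimension membership lists follow the 6/4/4 split used throughout this guide; the activation values in the usage note are invented for illustration:

```python
# Variable families per dimension (6 moral foundations, 4 hedonic, 4 eudaimonic).
DIMENSIONS = {
    "moral_foundations": [
        "care_harm", "equality_oppression", "proportionality_cheating",
        "loyalty_betrayal", "authority_subversion", "sanctity_purity",
    ],
    "hedonic": [
        "humour_amusement", "thrill_arousal",
        "comfort_mood_regulation", "hedonic_other",
    ],
    "eudaimonic": [
        "reflection_rumination", "insight_illumination",
        "meaning_appreciation", "eudaimonic_other",
    ],
}


def dimension_means(var_abar: dict[str, float]) -> dict[str, float]:
    """Mean activation per dimension, normalised by variable count, so a
    six-variable family is directly comparable with a four-variable one."""
    return {
        dim: sum(var_abar.get(v, 0.0) for v in variables) / len(variables)
        for dim, variables in DIMENSIONS.items()
    }
```

For instance, a profile in which only humour–amusement is active at ā = 0.8 yields a hedonic dimension score of 0.2 and zero for the other two families.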

What this looks like across a real cohort

As a concrete illustration: among a sample of five reviews of Kneecap drawn from the CineFlux output, the group-level classifications assigned by the model's cohort-summary stage show clear within-film variation:

Five Kneecap reviews — group compass classifications
Reviewer MF level Hedonic level Eudaimonic level Variables active
isa HIGH MED MED 11 / 14
LakeFoundExit HIGH — — 5 / 14
msbeacic MED — — 2 / 14
Tuesday Night Movie Night LOW MED — 3 / 14
Tom Morton LOW MED — 3 / 14

Source: CineFlux model output for Kneecap (tt27367464). Group compass classifications (high/med/low) are derived from per-reviewer activation vectors relative to cohort norms. Dashes indicate that dimension was below the active threshold for that reviewer.

Even across this small sample, the variation is striking. Two reviewers engaged heavily with moral foundations—both rated Kneecap's political and ethical content as the film's primary mode of address. Two others barely touched moral foundations and responded primarily through hedonic motivation: this was a pleasurable, entertaining film. One reviewer (msbeacic) fell in the middle, with a thin but present moral engagement.

When these profiles are aggregated to the film level, the mean activation across all three dimensions will reflect this spread. The MF mean activation will be pulled up by isa and LakeFoundExit and pulled down by Tom Morton and Tuesday Night Movie Night. The hedonic mean will capture the reviewers for whom the entertainment value was the dominant experience. The pattern that emerges is not a consensus—it is a distribution. Understanding that distribution is what CineFlux is designed to enable.

All aggregate statistics in CineFlux are deterministic summaries of auditable per-review annotations. Every mean activation score can be traced back through the cohort to the individual reviews that contributed to it, and from each review to the specific text spans and justifications that produced its variable scores. The aggregate is not a black box—it is a compression of verifiable evidence, and the evidence is always available for inspection.
Block 6

Reading a film's reception profile

The aggregation stage produces a structured profile: a stance-weighted bar for each variable, three dimension-level summaries, and a coverage count. This block shows what that profile looks like — and how to read it correctly — using the full Kneecap corpus from the CineFlux tool.

Two layers in every profile

Every CineFlux film profile has two reading layers. The first is the variable layer: fourteen independent signals, one per psychological variable, each shown as a divergent bar. The bar extends to the right when reviewers who engaged with that variable leaned positive; it extends to the left when they leaned negative. Bar width reflects how strong and widespread that directional engagement was. Variables with no meaningful engagement show no bar at all.

The second is the dimension layer: three aggregate summaries — Moral Foundations, Hedonic Motivational Orientation, Eudaimonic Motivational Orientation — that compress the fourteen variables into a family-level signature. Because the three families contain different numbers of variables (six, four, and four respectively), dimension scores are normalized by variable count before comparison. This ensures that MF does not appear artificially large simply because it has more variables.

The variable-level profile

The chart below reproduces the divergent-bar format used in the CineFlux tool. Each row represents one variable. Bars extending right show positive-stance activation; bars extending left show negative-stance activation. The centre line is the zero point. This is the full Kneecap corpus.

Variable-level stance profile — Kneecap (full corpus, CineFlux tool)
Moral Foundations — Story World  ·  5 of 6 variables present
Care – Harm
Equality – Oppression
Loyalty – Betrayal
Authority – Subversion
Sanctity – Purity
Motivational Orientation — Hedonic  ·  4 of 4 variables present
Humour – Amusement
Thrill – Arousal
Comfort – Mood Regulation
Hedonic (other)
Motivational Orientation — Eudaimonic  ·  4 of 4 variables present
Reflection – Rumination
Insight – Illumination
Meaning – Appreciation
Eudaimonic (other)
Variable order follows the canonical sequence in the technical deliverable (§7.4.3). Bars extending right = positive-stance activation; bars extending left = negative-stance activation. Bar proportions read from the CineFlux tool dashboard (full Kneecap corpus). This format mirrors the stance-profile view in the CineFlux tool.

Reading the stance bars

The most immediate thing the variable-level chart communicates is directionality. Almost every bar in this profile extends to the right — positive-stance activation dominates across all three dimension families. The one exception is immediately visible: equality extends to the left.

This is not a sign of audience disapproval. As established in Block 4, stance is always variable-relative. For equality–oppression, a leftward (negative) bar means reviewers engaged with this film through the lens of injustice and oppression — they named it, identified it as a theme, and the film's depiction of linguistic oppression registered in their writing. They praised the film for taking it seriously. The negative stance reflects the variable's negative pole (oppression rather than equality affirmed), not the reviewer's negative attitude toward the film.

Authority–subversion shows a short leftward segment alongside a longer rightward bar. This means the corpus contains a mix: most reviewers who engaged with this variable endorsed the subversion (rightward), but a small number oriented toward the authority pole — perhaps reading the band's confrontational stance with ambivalence rather than celebration.

The divergent format makes audience ambivalence visible at a glance. A variable with roughly equal bars in both directions — a symmetric divergent profile — signals genuine polarisation: the audience engaged with that dimension but split on how to orient toward it. A variable with a single bar in one direction signals consensus. The shape of the bars tells a different story from their length alone.

The dimension-level summary

Compressing the fourteen variables into three dimension-level scores produces the compact signature visible at the top of the tool's profile panel. The normalized mean activations (ĀC,X) for the full Kneecap corpus are:

Moral Foundations
29%
Mean across 6 variables
Story-world scope
Hedonic MO
34%
Mean across 4 variables
Reviewer-experience scope
Eudaimonic MO
35%
Mean across 4 variables
Reviewer-experience scope

Source: CineFlux tool dashboard, full Kneecap corpus. Percentages are dimension-level mean activations ĀC,X (§7.4.4), normalized by variable count per dimension (6, 4, 4).

Three numbers that are almost identical: 35%, 34%, 29%. Eudaimonic and hedonic are essentially tied; moral foundations is a few points behind. This is a genuinely unusual profile — and a genuinely informative one.

Kneecap is not primarily a moral text, nor primarily an entertainment, nor primarily a film that provokes deep reflection. It is all three at similar intensities. Reviewers who engaged eudaimonically — reporting meaning, insight, or a sense that the film mattered — did so at essentially the same rate as reviewers who engaged hedonically, reporting fun, thrill, and entertainment. And a substantial proportion of the corpus engaged morally, reading the film's political content through the lens of loyalty, authority, and care for a threatened culture.

A sentiment score would report: audiences loved it. A genre classifier might note: music biopic, political comedy. CineFlux says something different and more specific: this film activates all three fundamental modes of psychological engagement at nearly equal intensity, with moral foundations slightly behind the experiential dimensions. It is the kind of film that means something to people and entertains them in roughly equal measure — and that reading of it is grounded in what reviewers actually wrote.

What the profile cannot tell you

Three important limits apply to everything in a CineFlux profile.

First: the profile is a cohort summary, not a causal claim. A strong bar on authority–subversion means that reviewers who engaged with that variable leaned toward endorsing subversion. It does not mean the film caused that orientation, or that the filmmaker intended it, or that all future audiences will respond the same way. The profile describes a body of written reviews — what reviewers chose to write is a filtered and partial expression of what they experienced.

Second: a missing bar is not evidence of absence. A variable with no visible bar means reviewers did not write about that dimension — not that they did not experience it. Proportionality–cheating has no bar in the Kneecap profile. This tells us the film's reception corpus does not contain reliable proportionality or cheating language. It does not tell us audiences felt no sense of fairness or violation. Written reviews are a proxy for psychological experience; CineFlux reads the proxy, not the experience itself.

A reception profile does not answer the question "what did the audience feel?" It answers the question "what did the audience write, and what psychological dimensions does that writing engage?" That is a more modest claim — and a more defensible one. The value of CineFlux is not that it reads minds. It is that it reads text carefully, at scale, with a principled framework, and produces a structured account of the psychological landscape reviewers chose to inhabit when they described their experience of a film.

Third: everything shown in this profile comes from a single audience source — organically generated public reviews from Letterboxd. This is the public baseline: an ambient picture of how the general reviewing public received the film. CineFlux was built to go further. When a second cohort is available — a recruited audience segment whose responses have been processed through the same annotation pipeline — the two profiles can be set side by side. The gap between them, measured variable by variable, is what CineFlux calls divergence. That comparison, and what it enables for funding and circulation decisions, is the subject of the next block.

Block 7

When audiences diverge

A reception profile is a portrait of one audience. CineFlux was built to answer a different question: when a second audience watches the same film, do they receive it the same way? And if not — where does their reception diverge, by how much, and what does that gap mean?

One film, two audiences

Everything in Blocks 5 and 6 described the aggregated profile of a single cohort: the public Letterboxd corpus for Kneecap. That profile tells us how the ambient reviewing public — people who chose to watch the film and write about it — engaged with its psychological content. It is a useful baseline. But it cannot answer the questions that funders, distributors, and cultural policy researchers most often need to ask.

Those questions are comparative: not "how did audiences receive this film?" but "how did this audience receive it differently from that audience?" A national broadcaster deciding whether to acquire a Portuguese-language drama needs to know whether young domestic audiences engage with its moral logic differently from the global reviewing public. A festival programmer needs to know whether the audiences they are targeting receive the film eudaimonically—as meaningful—when the public record is predominantly hedonic. The gap between profiles is where the actionable information lives.

CineFlux calls this gap divergence. It is computed by running two separate audience cohorts — a public baseline and a segmented cohort — through the same annotation pipeline, and then measuring, variable by variable, how their profiles differ.

Public baseline
Source: Organic reviews from Letterboxd (or equivalent platform)
Who: Self-selected public reviewers — viewers who chose to watch the film and chose to write about it
Character: Ambient, uncontrolled, large-volume. Represents the general reviewing public rather than any defined segment
Role: The baseline against which segment reception is measured
Segmented cohort
Source: Recruited participants in structured focus-group sessions
Who: Defined audience segment — in the CRESCINE study: Portuguese undergraduate students, 18–24 years old, gender-balanced, across 10 sessions (≈ 80 participants total)
Character: Controlled, demographically specified, smaller volume. Represents a target segment of interest
Role: The cohort whose divergence from the baseline is the analytical object

The focal film for the CRESCINE divergence analysis is Amelia's Children (Semente do Mal, dir. Gabriel Abrantes, 2023), a Portuguese-language supernatural drama. It has a public Letterboxd corpus and a full set of focus-group sessions conducted in Portugal. It is the film for which both cohorts are available, and therefore the film for which divergence can be computed.

The paraphrase bridge — making focus-group speech comparable

A critical methodological challenge stands between focus-group data and a CineFlux profile: focus-group participants speak in turns, not in reviews. Their discourse is conversational, fragmented, responsive to other participants. A Letterboxd review is a composed, unitary expression of one person's reception. These are structurally incompatible — unless a controlled transformation makes them compatible.

CineFlux addresses this through a three-step conversion pipeline that distils each participant's contribution into a review-format text without adding any interpretive content:

01
Turn extraction
Each participant's spoken turns are identified and extracted from the full session transcript. Other participants' turns are set aside. The result is a per-participant turn sequence: everything one individual said across the entire session.
02
Semantic folding
The per-participant turn sequence is reorganised into coherent semantic units — thematic blocks that group together what the participant said about the same topic, regardless of when in the session they said it. Conversational fragments that belong to the same idea are folded together. This preserves the participant's full range of responses while removing the turn-taking structure of conversation.
03
Paraphrase (no inference)
GPT-4o is used as a paraphraser — not an analyst — to rewrite the semantic units as continuous review-format prose. The strict constraint: nothing may be added, inferred, or amplified. The paraphrase must faithfully represent only what the participant expressed. The result is a text that can be processed by the TAS annotation pipeline on the same terms as a Letterboxd review.
The paraphrase step is a bridge, not an analysis. GPT-4o converts spoken register to written register and conversational fragments to continuous prose — but it does not interpret, expand, or re-characterise what the participant said. The subsequent TAS annotation is performed on the paraphrased text, not on the original transcript, ensuring that the extraction stage applies the same instrument to both cohorts.
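Structurally, the three steps can be sketched as follows. The transcript format, function names, and the paraphrase callable are all assumptions for illustration; in the actual pipeline, step 2's folding is semantic rather than positional, and step 3 is performed by GPT-4o under the no-inference constraint:

```python
from collections import defaultdict
from typing import Callable


def extract_turns(transcript: list[tuple[str, str]]) -> dict[str, list[str]]:
    """Step 1 — turn extraction: collect each participant's turns,
    setting the other speakers aside."""
    turns: dict[str, list[str]] = defaultdict(list)
    for speaker, text in transcript:
        turns[speaker].append(text)
    return dict(turns)


def fold_units(turns: list[str]) -> list[str]:
    """Step 2 — semantic folding (identity placeholder): the real step
    merges fragments about the same topic into thematic blocks."""
    return turns


def to_review(units: list[str], paraphrase: Callable[[str], str]) -> str:
    """Step 3 — paraphrase each unit into written register; the real
    constraint is that nothing may be added, inferred, or amplified."""
    return " ".join(paraphrase(u) for u in units)
```

The output of `to_review` is what enters the TAS annotation pipeline, on the same terms as a composed public review.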

Divergence, defined

Once both cohorts have been annotated and their variable-level statistics computed, divergence is computed as two separate percentage-point differences per variable — one for positive-stance engagement, one for negative-stance engagement. For each variable y, define the positive-stance rate and negative-stance rate for a cohort as the proportion of that cohort's reviews in which the variable activated with positive or negative stance respectively:

pos_rate(C, y) = n_pos(C, y) / n(C) ∈ [0, 1]
neg_rate(C, y) = n_neg(C, y) / n(C) ∈ [0, 1]

Divergence is then the difference between the segmented cohort and the public baseline on each of these rates:

Δ+(y) = pos_rate(seg, y) − pos_rate(pub, y) ∈ [−1, +1]
Δ−(y) = neg_rate(seg, y) − neg_rate(pub, y) ∈ [−1, +1]

Both differences are displayed as bars on the same centred axis — blue for Δ+, red for Δ−. A bar extending right means the segment exceeds the public on that measure; left means the segment is below. Bars are scaled to a fixed maximum of ±50 percentage points.

Separating the two rates rather than collapsing them into a single mean-activation difference is a deliberate design choice. A variable can show a large positive blue bar and a large positive red bar simultaneously — meaning the segment engages with that variable more than the public does, but splits internally between positive and negative stance. A single activation difference would report moderate divergence and miss the polarisation entirely. Keeping Δ+ and Δ− independent makes that pattern visible.
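The two rates and their differences follow directly from the cohort counts. A minimal sketch of the definitions above; the counts are hypothetical:

```python
# Divergence computation as defined above. Cohort counts are hypothetical:
# (n_pos, n_neg, n_reviews) for one variable in one cohort.

def rates(n_pos, n_neg, n_reviews):
    """Positive- and negative-stance rates for one variable in one cohort."""
    return n_pos / n_reviews, n_neg / n_reviews

def divergence(seg, pub):
    """(Δ+, Δ−): segmented-cohort rate minus public-baseline rate."""
    seg_pos, seg_neg = rates(*seg)
    pub_pos, pub_neg = rates(*pub)
    return seg_pos - pub_pos, seg_neg - pub_neg

# e.g. 17/30 segment reviews positive-stance vs 24/120 public reviews
d_pos, d_neg = divergence(seg=(17, 6, 30), pub=(24, 18, 120))
print(round(d_pos, 3), round(d_neg, 3))
```

Both differences lie in [−1, +1] and are read as percentage points when rendered as bars.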

The schematic below illustrates the reading convention with hypothetical values:

Divergence indicator format — schematic illustration (hypothetical values)
Blue bar = Δ+ (positive-stance rate: segment − public) · Red bar = Δ− (negative-stance rate: segment − public) · Right = segment higher · Left = segment lower · Scale: ±50 pp

Moral Foundations: care–harm Δ+ +0.34, Δ− +0.08 · equality–oppression Δ+ −0.12, Δ− +0.26 · authority–subversion Δ+ +0.18, Δ− +0.20
Hedonic MO: humour–amusement Δ+ −0.30, Δ− −0.14 · thrill–arousal Δ+ +0.22, Δ− +0.06
Eudaimonic MO: reflection–rumination Δ+ +0.42, Δ− +0.10 · meaning–appreciation Δ+ +0.28, Δ− +0.04

Hypothetical values for illustration only. Δ+ = pos_rate(seg) − pos_rate(pub); Δ− = neg_rate(seg) − neg_rate(pub). Values are rate differences (S1 − public), read in percentage points (e.g. +0.34 = +34 pp). Bars clamped to ±50 pp. Derived from trigger → appraisal → stance annotations.

Reading this schematic: authority–subversion shows two moderate bars of similar size extending right — the segment is more engaged with this variable than the public, but its engagement is internally divided between positive and negative stance. A single activation-difference measure would report modest divergence and miss the polarisation. Equality–oppression shows a leftward blue bar and a rightward red bar — the segment is less likely than the public to engage this variable with positive stance, but more likely to engage it with negative stance. That pattern reveals a directional shift, not just a level shift in overall engagement.

The reception map

The divergence indicators give you the variable-by-variable gap. But they do not show you the structure of individual reception — who agrees with whom, whether the two cohorts form distinct clusters, or whether there is a spread of responses that a mean conceals. For this, CineFlux produces a reception map: a two-dimensional projection of all reviews in the fourteen-dimensional reception space.

The projection (computed using MDS or UMAP dimensionality reduction) places each review as a point in 2D space such that reviews with similar psychological fingerprints appear nearby, and reviews with different fingerprints appear farther apart. The visual encoding carries four simultaneous pieces of information:

Circle colour
Indicates audience source: public baseline (grey-blue) versus segmented cohort (amber). This is the primary visual separation — if the two cohorts cluster apart, it means their reception fingerprints are systematically different.
Circle size
Encodes user rating: larger circles = higher scores. A tight cluster of large circles from one source means that cohort rated the film highly and agreed on its reception fingerprint.
Circle stroke
Encodes net valence: blue-toned border = net positive signed orientation across variables; red-toned = net negative; grey = mixed or weak stance. A cluster of positive-stroke circles from the segment cohort means they engaged with the film approvingly across multiple dimensions.
Proximity
Indicates reception similarity: two reviews close together on the map processed the film through similar psychological pathways, regardless of which cohort they belong to. Public and segment reviews that intermix on the map share more with each other than with reviews from their own source that plot elsewhere.

The map does not replace the variable-level profile — it adds a structural layer. A bifurcated map, with two distinct clusters, one for each source, tells a different story from an intermixed map where sources are spread across the same space. The former signals systematic divergence; the latter signals that the segment and the public are processing the film in similar ways despite their demographic differences.
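The projection step can be illustrated with classical MDS, one of the two techniques the document names (the actual pipeline may use metric MDS or UMAP with different settings). A minimal numpy sketch, with 4-dimensional stand-ins for the 14-variable activation vectors:

```python
import numpy as np

# Classical MDS sketch: project per-review activation vectors to 2D so that
# similar reception fingerprints plot nearby. Vectors are 4-dim here for
# brevity (14-dim in CineFlux); the review data is hypothetical.

def classical_mds(X, dims=2):
    D2 = np.square(np.linalg.norm(X[:, None] - X[None, :], axis=-1))
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ D2 @ J                    # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dims]    # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.clip(vals[order], 0, None))

reviews = np.array([
    [0.8, 0.1, 0.7, 0.2],   # two reviews with similar fingerprints...
    [0.7, 0.2, 0.6, 0.1],
    [0.1, 0.9, 0.1, 0.8],   # ...and one that processed the film differently
])
coords = classical_mds(reviews)  # 2D anchor positions, one row per review
```

The first two reviews land close together in the 2D plane and the third lands far from both, which is exactly the "proximity = reception similarity" encoding the map relies on.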

The three-layer story spine

The CineFlux tool is organised around a structured analytical sequence that the deliverable calls the three-layer story spine (§8.1.2). It is the interpretive logic that connects the raw outputs to a fundable claim.

1
Layer one — Overview
What does the landscape of reception look like?
The reception map and the public-baseline profile answer this first. How spread out are the reviews? Are there identifiable clusters? Does the public corpus show high or low activation across dimensions? Does the map show internal disagreement within the public — sub-audiences responding to entirely different aspects of the film? This layer establishes the baseline landscape before the segment is introduced.
2
Layer two — Contrast
How does the target segment diverge from the public baseline?
The divergence indicators (Block B of the decision lens) answer this. Which variables are more active in the segment? Which are less active? Does the segment's orientation on contested variables differ from the public's? Does the map show the segment clustering distinctly, or does it intermix with the public? This layer makes the gap quantitative and specific — not "the segment responded differently" but "the segment activated reflection–rumination 0.71 points above the public baseline and engaged humour–amusement 0.61 points below it."
3
Layer three — Consequence
What does the divergence imply for funding and circulation decisions?
This is the operational layer. If the segment engages eudaimonically more than the public does, the film may have public-value potential that the Letterboxd record does not reflect. If the segment activates moral foundations strongly but the public does not, there may be a cultural-framing argument to be made about the film's relevance to that demographic. Conversely, if the segment shows markedly lower activation overall, the assumption that the film will resonate with this audience needs to be scrutinised. The divergence profile does not make the decision — it structures the evidence that informs it.

What divergence enables

The deliverable identifies three specific decision-making functions that divergence analysis supports (§8.1.6):

The first is diagnosing interpretive fit and friction. A high-divergence profile — where the segment differs sharply from the public across several variables — signals that the film's psychological content lands differently for this audience. Whether that difference represents an opportunity (the segment engages in a particularly valuable way) or a friction (the segment disengages where the public engages) depends on the direction of the divergence bars. The diagnosis is the starting point for any targeted distribution or marketing strategy.

The second is making segment divergence operational. Traditional audience segmentation often produces findings like "younger audiences responded more positively." CineFlux replaces the valence comparison with a reception-specific one: "younger audiences activated the eudaimonic dimension 0.4 points above the public baseline but activated moral foundations 0.2 points below it." The second statement is actionable in ways the first is not — it tells you which psychological pathway the film is taking for this audience, which is the information you need to make claims about cultural value or public benefit.

The third is linking divergence to realistic interventions. The combination of which dimension diverges and in which direction determines what kind of intervention is plausible. If a segment activates moral logic but appraises it negatively, that is a different situation from a segment that does not activate moral logic at all. In the first case, the film is registering — but generating resistance. In the second, it is not reaching the segment's moral framework at all. These two cases call for entirely different responses, and the divergence profile is what distinguishes them.

Divergence analysis is not designed to select films that score well with a target audience. It is designed to characterise, precisely, how a target audience's reception differs from the ambient public — and to make that characterisation concrete enough to support a funding or programming argument. A high divergence score is not better or worse than a low one: it is a different kind of information. The question is always what the gap means in the specific context of the film, the audience, and the decision being made.

Evidence coverage and interpretive reliability

A final component of the tool's decision lens (Block C, §8.1.5) addresses a question that the divergence indicators alone cannot answer: how much confidence should a researcher place in what they are seeing? Three checks govern this:

Coverage is the proportion of reviews in a cohort that triggered a given variable at all. A divergence bar for a variable triggered in only two of thirty reviews is less trustworthy than one triggered in twenty-five. Low-coverage variables are flagged in the tool's output.

Balance examines whether the stance distribution across a cohort is sufficiently spread to support an orientation claim. A variable activated in thirty reviews, all with positive stance, is highly consistent; but such a one-sided distribution may reflect an annotation artefact rather than genuine consensus. The balance check flags unusual distributions for researcher attention.

Sparsity flags are raised when a cohort is too small — or a variable too rarely triggered — for the computed statistics to be stable. A cohort of eight reviews is enough to produce numbers, but the confidence interval around those numbers is wide. CineFlux surfaces these flags alongside the divergence indicators so that interpretive claims are calibrated to the actual evidential base.

Divergence analysis requires two cohorts, adequate coverage in both, and the same annotation pipeline applied to both. When those conditions are met, the resulting comparison is the most psychologically specific form of audience segmentation currently available for film reception research. When they are not fully met — a sparse segment cohort, a variable triggered in fewer than five reviews — the tool says so. The confidence checks are not a limitation; they are the tool being honest about the evidential floor beneath its claims.
Block 8

What CineFlux produces

Everything described in Blocks 1–7 resolves into two concrete artefacts: a JSON data file and an interactive HTML tool that reads it. This block is a field-by-field guide to both — what each part of the output contains, where to find it, and what it means in terms of the framework you now understand.

The two artefacts

CineFlux's pipeline writes its output to a single JSON file (by default cineflux-root-clean-v3.json). This file is the complete, machine-readable record of every annotation decision, every derived score, and every pre-computed aggregate for every film and cohort in the dataset. It is the ground truth: everything the tool displays is read from it at runtime.

The tool itself is a self-contained HTML page (cineflux-mvp.html) that loads the JSON, renders the reception map, and populates the decision lens. Nothing is hard-coded in the tool — every number, bar, and badge is derived from the JSON on load. If you need to interrogate the data directly or export it for further analysis, the JSON is the file to work with.
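Interrogating the JSON takes only a few lines. A sketch walking the documented root structure; the inline object mirrors the schema described in the tables below rather than reading the real cineflux-root-clean-v3.json from disk, and the field values are illustrative:

```python
import json

# Walk the documented root structure: films → reviews → items.
# The inline object stands in for the real file on disk.
raw = json.dumps({
    "films": {
        "amelia_children": {
            "metadata": {"title_en": "Amelia's Children", "release_year": 2023},
            "reviews": {
                "public": {"items": [{"review_id": "r1", "user_rating": 4}]},
                "segmented": {"S1": {"items": [{"review_id": "s1", "user_rating": 5}]}},
            },
        }
    }
})

data = json.loads(raw)
for film_id, film in data["films"].items():
    n_pub = len(film["reviews"]["public"]["items"])
    # segmented.S1 is absent for films without focus-group data
    seg = film["reviews"].get("segmented", {}).get("S1", {}).get("items", [])
    print(film_id, film["metadata"]["title_en"], n_pub, len(seg))
```

The defensive `.get()` chain matters: per the schema below, `reviews.segmented.S1` is simply absent when a film has no segmented cohort.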

The JSON output — root structure

The root object has one key:

Field · Type · Contents
films object One key per film, using a stable film identifier string (e.g. "amelia_children"). Each value is the complete data object for that film.

Per-film object

Field · Type · Contents
metadata
metadata.title_en string English-language title, used for display in the film selector and map header.
metadata.title string Original-language title. Fallback if title_en is absent.
metadata.release_year number Four-digit release year, shown in film meta strip.
metadata.genres array | string Genre tags. Displayed as a comma-separated list in the film meta strip.
metadata.production_countries array | string Production country codes or names. Displayed in the film meta strip.
reviews
reviews.public.items array Array of public-source review objects (e.g. Letterboxd). Each item is one annotated review.
reviews.segmented.S1.items array Array of segmented-cohort review objects. S1 is the segment identifier for the CRESCINE focus-group cohort. Absent if no segmented data is available for this film.
ui_aggregates
ui_aggregates.cohort_rates_v1 object Pre-computed cohort-level statistics used by the decision lens. Contains a cohorts sub-object with keys public, combined, and segmented.S1 (when available), each holding per-dimension rate arrays.

Per-review object

Each item in reviews.public.items or reviews.segmented.S1.items has the following structure:

Field · Type · Contents
review_id string Stable identifier for this review across pipeline runs.
user_rating integer 1–5 Reviewer's star rating, normalised to a 1–5 scale. Maps to circle radius on the reception map (1 = smallest, 5 = largest).
derived.combined — map-level derived data
derived.combined.viz_id string Unique identifier used by the map to track individual circles across filter changes.
derived.combined.viz_ready boolean Pipeline flag: true means this review passed all quality checks and can be rendered. Reviews with false are silently excluded from the map.
derived.combined.source string "public" or "segmented". Determines circle fill colour on the map (grey-blue for public, amber for segmented).
derived.combined.segment_id string | null Segment identifier for segmented reviews ("S1"). null for public reviews.
derived.combined.normalised_projection array [x, y] Two-dimensional coordinates in the reception map's projection space. Computed by dimensionality reduction (MDS / UMAP) over the full 14-variable activation vector. These are the anchor positions around which the force simulation settles each circle.
derived.combined.group_compass string Dimension-level classification for this review, formatted as "MF:HIGH Hed:MED Eud:LOW". Each dimension receives a HIGH / MED / LOW label based on its normalised activation score relative to the cohort. Shown in the hover tooltip on the map.
derived.combined.polarity_summary.class string Net stance classification for the whole review: "positive", "negative", or "neutral".
derived.combined.polarity_summary.stance_valence float −1 to +1 Net signed orientation averaged across all activated variables. Determines circle stroke colour: blue (positive), red (negative), dark grey (neutral). The threshold for "neutral" is |valence| < 0.20.
derived.combined.polarity_summary.stance_strength float 0 to 1 Mean absolute signed orientation across activated variables — how strongly the review takes directional stances, regardless of direction. Determines circle stroke width: no stroke (<0.20), thin (0.20–0.50), thick (>0.50).
derived.combined.coverage_summary.n_active integer 0–14 Number of variables (out of 14) that activated in this review. Shown in the tooltip as "X / 14 variables activated."
derived.mf / derived.mo_hedonic / derived.mo_eudaimonic — per-variable scores
derived.mf.variable_order array of strings The six MF variable names in canonical order: care_harm, equality_oppression, proportionality_cheating, loyalty_betrayal, authority_subversion, sanctity_purity.
derived.mf.activation array of float [0,1] Per-variable activation scores αr,y for this review, in the same order as variable_order. Computed by the TAS formula: α = 0.2·t + 0.4·a + 0.4·δ.
derived.mf.signed array of float [−1,+1] Per-variable signed orientation scores σr,y for this review. Each value is the activation score multiplied by the stance sign (+1, 0, −1).
derived.mo_hedonic same structure as mf Variables in order: humour_amusement, thrill_arousal, comfort_mood_regulation, hedonic_generic (displayed as "hedonic (other)").
derived.mo_eudaimonic same structure as mf Variables in order: reflection_rumination, insight_illumination, meaning_appreciation, eudaimonic_generic (displayed as "eudaimonic (other)").
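The derived fields above can be reproduced from the TAS annotation values. A minimal sketch, assuming illustrative inputs; the α formula, stance signs, and stroke thresholds are taken from the table, while the precise meaning of the three annotation components is defined in the deliverable:

```python
# Per-review derived scores, following the table above:
#   activation  α = 0.2·t + 0.4·a + 0.4·δ   (three TAS annotation components)
#   signed      σ = α × stance sign          (stance sign ∈ {+1, 0, −1})
# plus the documented stroke encoding: |valence| < 0.20 → neutral colour;
# strength bands at 0.20 and 0.50 for stroke width.

def activation(t, a, delta):
    return 0.2 * t + 0.4 * a + 0.4 * delta

def signed(alpha, stance_sign):
    return alpha * stance_sign

def stroke(valence, strength):
    colour = "dark grey" if abs(valence) < 0.20 else ("blue" if valence > 0 else "red")
    width = "none" if strength < 0.20 else ("thin" if strength <= 0.50 else "thick")
    return colour, width

alpha = activation(1.0, 0.5, 0.5)   # hypothetical annotation values → 0.6
sigma = signed(alpha, -1)           # activated, but with negative stance
print(stroke(valence=-0.6, strength=0.6))
```

With these inputs the review renders with a red, thick stroke: a strongly held, net-negative stance.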

Pre-computed cohort aggregates

The ui_aggregates.cohort_rates_v1.cohorts object contains pre-computed rate arrays for each cohort and dimension. These are used by the Profiles and Divergence tabs in the decision lens; the tool computes the same statistics on the fly from the selected review subset when the user applies filters or makes a manual selection.

Field · Type · Contents
variable_order array Variable names for this dimension in canonical order.
n_reviews integer Number of reviews in this cohort.
n_active array of integer Per-variable count of reviews in which the variable activated (α > 0).
n_pos / n_neg array of integer Per-variable count of reviews with positive / negative signed orientation. These are the numerators of the pos_rate and neg_rate arrays used to compute divergence.
active_rate array of float [0,1] Per-variable activation rate = n_active / n_reviews. Used by the Reliability tab's coverage checks.
pos_rate / neg_rate array of float [0,1] Per-variable positive / negative-stance rates. The divergence indicators (Block B) are computed as Δ+ = pos_rate(S1) − pos_rate(public) and Δ− = neg_rate(S1) − neg_rate(public).
mean_activation array of float [0,1] Per-variable mean activation āC,y.
mean_signed array of float [−1,+1] Per-variable mean signed orientation σ̄C,y.

The tool interface

The HTML tool has two panels separated by a fixed vertical split. The left panel is the reception map; the right panel is the decision lens.

Left panel — Reception map
What it shows: Every review as a circle in a 2D projection of the 14-dimensional reception space. Reviews with similar psychological fingerprints plot nearby.
Circle fill: Grey-blue = public cohort · Amber = segmented cohort (S1).
Circle size: User rating 1–5 (small to large).
Circle stroke colour: Blue = net positive stance · Red = net negative · Dark grey = neutral. Threshold: |valence| < 0.20.
Circle stroke width: No stroke = weak stance · Thin = moderate · Thick = strong. Thresholds: 0.20 and 0.50.
Hover tooltip: Group compass (MF/Hed/Eud level), top active variable per dimension, coverage count, stance class and valence.
Click: Adds / removes a review from the selection. The decision lens updates to reflect the selected subset.
Right panel — Decision lens
Header: Selected N / Visible N / Total N · active source and rating filters · current tab and scope.
Three tabs: Divergence (Block B) · Profiles (Block A) · Reliability (Block C). The tool defaults to Divergence when both sources are visible, Profiles otherwise.
Scope: When no reviews are selected, all visible reviews are used. When reviews are selected (by clicking circles), the lens shows statistics for the selection only.
All tabs update live when filters or selection change.

Query controls

Above the map, four controls determine what is shown:

Control · Options · Effect
Film dropdown Selects which film to display. Populated from the films keys in the JSON. Changing film resets the map and clears all selections.
Source Both · Public only · Segmented (S1) only Filters which audience source is shown on the map and used in the decision lens. "Both" is only available for films that have S1 data; films without S1 data default to "Public only" and lock the control.
Ratings All · Low (1–2) · Neutral (3) · High (4–5) Restricts the visible reviews to the selected rating band. Useful for comparing how high-rating and low-rating audiences differ in their psychological engagement.
Selection All visible · None · click circles "All visible" selects everything currently on the map. "None" clears the selection. Individual circles can be added or removed by clicking. The decision lens always operates on the active set (selection if non-empty, visible otherwise).

The decision lens — three tabs

Divergence
Profiles
Reliability

Divergence (Block B) is the default tab when both cohorts are visible. For each variable, it shows two percentage-point bars: the blue Δ+ bar (difference in positive-stance rate, S1 minus public) and the red Δ− bar (difference in negative-stance rate). Variables are sorted by combined divergence magnitude (|Δ+| + |Δ−|), so the most divergent variables appear at the top. Only the top 8 are shown per dimension. Bars are scaled to a fixed ±50 percentage-point maximum. When divergence data is unavailable (no S1 for the selected film), the tab shows a clear message rather than empty space.


Profiles (Block A) shows the per-variable stance profile for one cohort at a time. Dimension subtabs (MF · MO-hedonic · MO-eudaimonic) switch between the three families. For each variable, a divergent microbar shows the positive-stance rate extending right and the negative-stance rate extending left. Variables that fall below evidence thresholds — fewer than 3 activations, or an active rate below 10% — are hidden by default. Ticking "Show thin evidence" reveals them. When both sources are available, a cohort subtab (Public · Segment S1) switches which profile is shown.


Reliability (Block C) shows evidence-quality badges — green (ok) or red (bad) — for each dimension in each cohort. Three checks are reported:

Coverage: the mean activation rate across variables in the dimension. Below 10% mean activation rate, the dimension is flagged as "thin" — interpretive claims about that dimension should be treated with caution.

Balance: the proportion of directional activations (those with a positive or negative stance) that are positive. If more than 85% of all directional stances in a dimension point the same way, the dimension is flagged as "one-sided" — the apparent orientation may reflect a data imbalance rather than genuine consensus.

Sparsity: the count of individual variables whose activation rate falls below 10%. A dimension with many sparse variables may appear to have a stable profile when in practice most of that profile is built from a handful of activations per variable.
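The three checks can be sketched against a cohort's rate arrays. The thresholds (10% mean coverage, 85% one-sidedness, 10% per-variable sparsity) follow the text above; the example values are hypothetical, and the tool's exact aggregation may differ:

```python
# Reliability badges for one dimension in one cohort, using the documented
# thresholds. Inputs mirror the cohort-aggregate arrays described in Block 8.

def reliability(active_rate, n_pos, n_neg):
    # Coverage: mean activation rate across the dimension's variables
    coverage_ok = sum(active_rate) / len(active_rate) >= 0.10
    # Balance: flag if >85% of directional stances point the same way
    directional = sum(n_pos) + sum(n_neg)
    pos_share = sum(n_pos) / directional if directional else 0.0
    balance_ok = 0.15 <= pos_share <= 0.85
    # Sparsity: count variables triggered in fewer than 10% of reviews
    sparse = sum(1 for r in active_rate if r < 0.10)
    return {"coverage": coverage_ok, "balance": balance_ok, "sparse_variables": sparse}

# Six MF variables: decent coverage, but stances lean heavily positive
# and two variables are rarely triggered.
badges = reliability(
    active_rate=[0.40, 0.30, 0.25, 0.20, 0.05, 0.08],
    n_pos=[10, 8, 6, 5, 1, 2],
    n_neg=[1, 1, 1, 0, 0, 0],
)
print(badges)
```

Here coverage passes, balance is flagged (32 of 35 directional stances are positive), and two variables are marked sparse — the kind of mixed verdict the Reliability tab is designed to surface.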

The Reliability tab does not tell you whether the findings are interesting or important. It tells you whether the evidence base is thick enough to support the interpretive claims the other two tabs invite you to make. A strong divergence bar on a variable with two activations is not the same as a strong divergence bar on a variable with thirty. The badges make that difference explicit rather than hiding it in a confidence interval.

What the tool does not expose

Two things are in the JSON but not surfaced in the current tool interface. First, the full TAS annotation rationale for each review — the per-variable trigger, appraisal, and stance values, and any reasoning text the pipeline generated — is stored in the JSON but not displayed in the MVP. Accessing it requires reading the JSON directly. Second, the group compass classification for individual reviews (MF:HIGH Hed:MED Eud:LOW) appears in the hover tooltip but is not used to sort, filter, or aggregate in the current interface. Both are present in the data and open to future development.

Block 9

What CineFlux cannot do

The most dangerous instrument is one that does not know — or does not say — where it stops. CineFlux names its own limits clearly. This block is that naming.

Everything in this document has described what CineFlux does, how it works, and what its output means. This block inverts that. It describes the things CineFlux does not do, cannot do in its current form, and should not be asked to do. These are not apologetic footnotes. They are design decisions made explicit, and they matter for how you use the tool.

1. The data is a proof of concept, not a market portrait

The current MVP corpus is deliberately narrow. Films are drawn from two countries: Portugal and Denmark. The only segmented cohort is Portuguese students aged 18–24, recruited through a focus group study using Amelia's Children (dir. Gabriel Abrantes, 2023) as the focal film. Public reviews are drawn from a curated set of international platforms.

Every number, every bar, every divergence score in the current tool reflects a specific, bounded sample. It does not represent all European small-market audiences, all Portuguese audiences, or even all young Portuguese audiences. The patterns are real. The scope is narrow.

The word "synthetic" in the JSON's segment identifier reflects this status. The corpus was constructed to build and test the pipeline, not to make production-grade claims about market positioning. Results from the current MVP should be treated as methodological demonstrations — proof that the pipeline works and that the signals it extracts are interpretable — not as actionable intelligence about actual audience segments at scale.

2. Two dimensions, not five

CineFlux's conceptual model defines five dimensions of audience reception. The MVP operationalises two: Moral Foundations (story world) and Motivational Orientation (hedonic and eudaimonic). The remaining three dimensions — covering ideological selectivity, cultural positioning, and cognitive-style differences — are theoretically defined in the research but not yet implemented in the tool.

The reason for the deferral is not that these dimensions are less important. It is that validating a computational extraction requires empirical benchmarks — existing data against which the system's outputs can be checked. The MF and MO dimensions had the richest available validation resources; the remaining dimensions do not yet have comparable benchmarks. The architecture is designed to extend, but until the remaining dimensions are validated and integrated, a substantial part of the full reception picture is absent.

When the tool shows you a complete profile across all fourteen variables, it is showing you the full current model — not the full psychology of audience response. Block 2 of this document flags this explicitly: the fourteen variables are those that are theoretically grounded, linguistically detectable, and amenable to empirical validation now. The model is designed to grow.

3. The cohort query is limited to three metadata fields

The tool's Query Controls allow filtering by three fields: production country, genre, and release year. These are the fields for which data is consistently available across the current corpus.

Many practically useful queries are not currently possible: filtering by director, by specific production company, by festival circuit trajectory (Cannes selection vs. IDFA vs. no festival distribution), by funding source, by WP3 market-cluster indicators, or by any of the distributional and co-production descriptors that characterise small-market films in the most analytically interesting ways. These fields exist in the research ecosystem — they are not yet in the data pipeline that feeds the tool. Integrating CRESCINE's WP2 and WP3 descriptors into the query layer is one of the primary items on the development roadmap.

4. The pipeline depends on a commercial LLM and has not been formally validated

The CineFlux annotation pipeline runs on a commercial large language model (GPT-4o). This is a pragmatic MVP choice: the model provides the controllability that the multi-gate structured extraction requires. It is not a long-term solution. Commercial API access is subject to pricing changes, deprecation, and terms-of-service shifts. It is not aligned with European commitments to open infrastructure and data sovereignty.

Several quantitative properties of the pipeline have not been put through formal validation beyond initial reliability work.

Additionally, although the pipeline has processed text in multiple languages, cross-lingual consistency has not been systematically validated. Prompt design and conservative saturation procedures were used to encourage consistent behaviour across languages, but formal cross-lingual comparison is still pending.

5. Moral Foundations measures reception — not people

This deserves unusually direct language, because the misreading has consequences. The MF dimension in CineFlux tracks how reviewers discuss the moral texture of the stories they watched. It does not measure, and must not be used to infer, the stable moral values or character of the reviewers themselves.

A reviewer who activates equality–oppression with a negative stance is not being identified as someone who opposes equality. They are reporting that the story depicted something they read as oppression. The variable measures a response to a narrative event, not a trait of the person responding.

This matters most when divergence scores are large. If a segmented cohort shows systematically different MF activation patterns from the public cohort, the correct interpretation is: this group reads the film's moral fabric differently. The incorrect interpretation — and the ethically problematic one — is: this group has a different moral character. CineFlux models how works are received and discussed. It does not profile individuals, and it does not infer stable moral identities.

Where the model goes next

The deliverable's §9.2 sets out the development roadmap. Five directions are named:

Expand the model Implement the three remaining CineFlux dimensions — ideological selectivity, cultural positioning, and cognitive-style differences — as validation resources become available.
Expand the data Scale to more films, countries, and audience segments. Conduct new focus groups with strategically selected cohorts. Widen the public-review corpus to cover additional small European markets beyond Portugal and Denmark.
Integrate CRESCINE descriptors Connect CineFlux reception profiles to the WP2 film-level descriptors and WP3 market and value-chain indicators, enabling cohort queries that go beyond the three current metadata fields.
Reduce LLM dependency Investigate open-weight models and European-hosted infrastructure as alternatives to commercial APIs, aligning the pipeline with open science and data-sovereignty commitments.
Extend the interface Build a conversational CineFlux platform — with informed consent and dashboarding capabilities — that allows audience experts to query the data through natural language rather than manual filter controls.
The limits documented here are not the tool's final condition. They are its current condition — made visible so that the evidence you draw from the MVP is evidence you can stand behind. A tool that names what it cannot do is a tool you can trust within the scope of what it can.