Generative AI: The Fabricated Solution to Life Under Late-Stage Capitalism

The use of generative AI in schools and workplaces is on the rise. People use generative AI platforms such as ChatGPT for many things, including writing essays or emails, editing their work, and doing research (Gibbs). Corporations have encouraged the population to embrace AI as a tool to make their lives easier (Mayer et al.). However, AI is causing more harm than good, both on an individual level through cognitive atrophy (Roxin), and on a larger scale through environmental degradation (Zewe) and the destruction of the community ties that underlie social responsibility (Roxin; Vilcarino and Langreo). These negative impacts will not dissuade billionaire elites from funding and developing AI; in fact, this destruction works in their favour (Klein and Taylor). This paper examines why the use of generative AI is encouraged among the general population by corrupt corporations and billionaire elites, and why, knowing the negative impacts of AI, large portions of the population continue to use it daily. I argue that the continued use of AI is encouraged because it reinforces capitalist power structures and the imbalance of wealth. Furthermore, people choose to use generative AI because of the strain that late-stage capitalism, which prioritizes an increased speed of production and profit over people, has put on the average person's life.
The rising use of generative AI in schools and the workplace
Since its introduction to the mainstream, generative AI has been largely embraced by the public, particularly in schools and the workplace. Naveen Kumar states that as of 2025, "86% of students in schools and higher education utilize AI" globally. Concerningly, students are not the only ones using this technology; Jennifer Vilcarino and Lauraine Langreo report that, according to the nonprofit Center for Democracy and Technology, "[e]ighty-five percent of teachers and 86% of students used AI in the 2024-25 school year." Based on these statistics, the use of generative AI is prevalent throughout all levels of education. The picture this paints is bleak: AI is reading, writing, and editing papers for students, and AI is grading those papers for teachers. Where does human thought and interaction come into this purely robotic exchange?
Similarly, corporations run by billionaires have begun embracing the technology. Araz Zirar et al. describe "Workplace Artificial Intelligence," which "helps organizations increase operational efficiency, enable faster-informed decisions, and innovate products and services." Employers are implementing this technology with the intention of boosting efficiency, and thereby profit, and employees are gladly cooperating. Hannah Mayer et al. find that "employees are more ready for AI than their leaders imagine," as they "are already using AI on a regular basis." They explain that among employees working white-collar jobs, there is a real fear that AI will replace them. Rather than dissuading them from using the tool, this fear encourages them to learn how to use it so as not to be left behind. AI is difficult to avoid even for workers and students who morally oppose its use, as the feature is automatically implemented into the Google search engine and other platforms, meaning true usage is likely higher than the statistics about intentional use suggest.
How the negative effects caused by generative AI use work in favour of the billionaire elites
While individuals' reasons for using AI vary, students' reliance on generative AI may indicate a lack of interest in, or an increasing indifference towards, the process of learning itself, as using the tool allows them to avoid exercising critical thinking, reading, and writing skills. Unfortunately, this desire not to do the work may eventually become an inability to do the work. Ioan Roxin reports that a study carried out by MIT found that "widespread use of this AI carries the risk of overall cognitive atrophy and loss of brain plasticity" by causing "reduced cognitive engagement" and impacting one's ability "to transform information into knowledge." Researchers found that when writing an essay with the help of AI, participants' memory formation and storage of knowledge were impacted, as "83% of AI users were unable to remember a passage they had just written." Andrew Chow describes the same study, stating that this lack of memory is caused by the "bypassing of deep memory processes" that occurs through the use of AI, and that those who wrote the essay without the technology "showed the highest neural connectivity… associated with creativity ideation, memory load, and semantic processing." Another study found that the use of Large Language Models, a form of advanced AI, increases the risk of "cognitive decline" by putting people into "a cumulative 'cognitive debt'": by automating processes originally performed by the prefrontal cortex, it prevents the brain from strengthening them (Roxin). These findings indicate long-term cognitive effects of continued generative AI use, which reduce learning and threaten the effectiveness of the education system.
Regardless of these negative cognitive effects, some of the wealthiest individuals and corporations are pushing for the continued funding and development of generative AI. The financial benefits are astronomical for those in power; OpenAI, the company that owns ChatGPT, "reached $10 billion in annual recurring revenue (ARR) as of June 2025," with revenue coming from "consumer products, ChatGPT business products, and API services" (Singh). The revenue growth expected from corporate use of AI is estimated at $4.4 trillion, and because of this, "92 percent of companies plan to increase their AI investments" (Mayer et al.). Elon Musk's company xAI, for example, alone received $12.1 billion in funding (Shrivastava). Clearly, those with financial influence see the financial reward as worth the intellectual and other risks posed by AI, because AI is becoming central to the system of capitalism itself. Nigel Walton and Bhabani Shankar Nayak state that AI legitimates the power of the capitalist system by "creating the inhuman power of big data which is a weapon of capital that controls the everyday lives of labour" (1). In this way, AI not only legitimates its own importance within the capitalist system, but also legitimates the capitalist structure itself by increasing economic growth for billionaire elites while simultaneously enabling compliance among consumer populations through social control.
Jonathan Crary's discussion of social media in his book Scorched Earth is relevant here. Crary argues that "the wealth and power of the billionaire class are structurally interconnected with key elements of the internet complex" (82), meaning the complex system of the internet contributes to the generation of their wealth. Because of this, wealthy elites are motivated to normalize the use of technology and social media by making people believe they cannot live without it. Crary states, "the primary goal of the most powerful stakeholders is the eventual transformation of everyone into captive and obedient consumers of their products and service," and because of this, "[t]he loudest voices declaring this impossibility are those who benefit from the perpetuation of the… uninterrupted functioning of a capitalist world" (18-19, 3). While Crary is discussing social media, these statements can be extended to generative AI; the tool takes the idea of the captive, obedient consumer further by not only discouraging the critical thinking process, but contracting it out to the forces of capital. As the technology financially benefits mega-corporations and the figureheads running them, its development and use will be pushed regardless of the consequences. Once AI has been pushed hard enough and ingrained into people's and companies' daily lives and operations, the voices calling for its regulation or even banning will be dismissed as ridiculous for making impossible demands.
Beyond the negative impacts on individuals, the prioritization of AI-related profit has irreversible environmental impacts. The increased electricity demands of data centers, which consume around "'seven or eight times more energy than a typical computing workload,'" require heavy operation of power plants that run on fossil fuels (Zewe). These data centers also require large amounts of cooled freshwater to regulate temperature and absorb the heat coming off equipment, at a time when water scarcity is already becoming an issue due to global warming. These practices are not sustainable, which generally does not concern the upper class, who have the wealth and privilege to pay their way out of dealing with the impacts of climate change. We already know U.S. President Donald Trump does not care how his actions affect the environment so long as they line his pockets; his statement that "[w]e will drill baby, drill!" in reference to destructive oil extraction practices conveyed that quite clearly (Igini). Unfortunately, those in power favour short-term gain over long-term survival.
It is difficult to understand why wealthy elites would prioritize profits over people and the environment when eventually, they too will be negatively affected by this destruction. However, in their article "The Rise of End Times Fascism," Naomi Klein and Astra Taylor describe the emergence of "end-times fascism": the active encouragement of the world's destruction within fascistic societies. Klein and Taylor state, "the most powerful people in the world are preparing for the end of the world, an end they themselves are frenetically accelerating." This is fueled by a sense of "supremacist survivalism," in which the wealthy elites, largely those who are white and conservative, believe themselves to be superior and thus worthy of surviving the apocalypse, rapture-style. In this way, the elites are not merely apathetic towards the environmental destruction caused by AI, but are actively embracing it. Klein and Taylor argue the elites are "slashing not only environmental regulations but entire regulatory agencies, with the apparent end goal of replacing federal workers with chatbots." This fantasy of replacing the real human beings whom the elites deem below them, determined by "racial, ableist, and gender biases about which parts of humanity are worth enhancing and saving," is obviously problematic, but it is made more feasible because AI offers a replacement for many workers. Crary also briefly explores this idea, arguing that "capitalism's long-term prospects depend on a drastic reduction in world population (i.e., the deaths of a few billion human beings)," which is needed to prevent the "social unrest, resource scarcity, and other instabilities" present under late-stage capitalism (26). This supports Klein and Taylor's argument and reinforces how capitalism feeds off the destruction and eventual loss of life caused by AI's harmful environmental effects.
In this way, the coming environmental destruction, which is exacerbated by the continued use and development of AI and will fall hardest on marginalized, lower-class people and those in developing countries, may indeed be the goal of the billionaires funding the project.
Beyond environmental destruction, the use of generative AI has catastrophic long-term effects on individuals and their psyches, which serves the individualism underpinning neoliberalism. Neoliberalism is an ideology that promotes individualism and profit over other values (Vallier) and has contributed to the negative effects of late-stage capitalism. The use of AI leads to increased feelings of social isolation (Roxin) and individualism, as people feel they can turn to AI for the things they would previously have asked other people to help with. For example, Vilcarino and Langreo found that in school, AI harms students' ability to "develop meaningful relationships with teachers" and contributes to "[a] decrease in peer-to-peer connections." No longer will students do their homework together, help each other with problems they do not understand, or form mentorship connections with teachers if ChatGPT can answer their questions and do the things a teacher can do. There will be no need to ask a friend for advice or consult a professional for help if ChatGPT is there to do it all for you.
This replacement of social community with robots is part of the dog-eat-dog neoliberal capitalist mindset, which prioritizes individual output and profit over social ties. This effect is not exclusive to AI use, but also arises out of the internet complex; Crary states that "[t]he internet overwhelmingly produces self-interested subjectivities incapable of imagining goals or outcomes other than private, individual ones" (14). It leads to the "'narcissistic apathy' of individuals emptied of desire for community" by naturalizing "how our needs, desires, and affections are diverted or severed from a commitment to care for a world lived in common with others" (7, 26). In this way, the use of generative AI exacerbates an already existing problem, pushing us further away from the kinds of human connection and interaction needed to care about our world and the people in it.
This encouragement of individualism, in combination with AI's attack on critical thinking and learning, benefits the elite upper class by minimizing dissent. Klein and Taylor argue that the more people know about the issue of end-times fascism, "the more they will be willing to fight back." However, the push for individualist productivity at the expense of social connection and community relationships, driven by reliance on AI, works against social mobilization and erodes social responsibility. People are not connected enough to form a movement of dissent, and do not care to, as they no longer feel a sense of social responsibility to other people in their community or on this earth. Furthermore, AI provides people with a distraction, whether through the AI-generated slop circulating on social media or through people's own use of the technology. Crary argues the internet complex already presents "[l]imitless digital diversions" which act as "a deterrent to the rise of anti-systemic mass movements" (10), and generative AI, as part of the wider internet complex, exacerbates this issue by pushing people farther into their digital distractions and away from each other.
If AI is so bad, why are people using it?
Despite the catastrophic effects of generative AI use on individual people, community ties, and the environment, people continue to use this technology. As of November 2025, the ChatGPT website received around "4.61 billion visits per month" (Duarte), and the platform had "800 million weekly active users," double what it was earlier this year (Singh). This level of usage indicates an acceptance of generative AI by the general public; not by everyone, but by enough to perpetuate the issues it causes. In order to understand how to address this issue, it is necessary to ask why people continue to use this technology while knowing its disastrous impacts. The most obvious answer is that many people likely do not know, meaning there needs to be increased education on the topic. There is not much scholarship on public awareness of the impacts of AI, but one study conducted at Queen's University in Ontario found that many students were not informed about the negative effects of AI and had a limited understanding of the implications of its use (Smith i-ii). While this is one study and the findings cannot be applied to everyone, the fact that students pursuing higher education are not aware of the issue indicates a lack of discussion on the topic and raises questions about whether those with less education know anything about it.
The other potential cause may be a willful ignorance among the populace of these damaging effects, in which the individual benefits of AI under late-stage capitalism, namely an increased speed of production, are deemed to outweigh the harms. In Scorched Earth, Crary argues that because of capitalist mindsets, people are developing "artificially manufactured appetites" fueled by the speed and instant gratification of the internet as well as the unlivable and insatiable "temporalities and values of an on-demand world" (2, 26). This appetite for more is not natural but manufactured by our social structure. Crary is referring to an appetite for consumption through social media, but his ideas can be applied to an appetite for AI-generated output. Under late-stage capitalism, people's value is determined by their productivity and how much output they can create in the shortest amount of time. AI is presented as a tool of productivity that can streamline the process of creation; essentially, as the solution to capitalist-induced pressure.
If AI is seen as the solution to late-stage capitalist burnout, or as a tool to keep up with the capitalist appetite, the increasing cost of living, the housing crisis, and the job crisis further enhance the perceived need for its use. As middle- and lower-class people's lives are made harder by the societal strains of late-stage capitalism, they come to feel that turning to AI is the solution to their problems. In North America, the cost of living is increasing to unmanageable levels; in the United States, 67% of people are living paycheck-to-paycheck (Stambor), and in Canada, inflation is driving increased levels of debt in middle- and lower-income households (Nguyen). There is also currently a job crisis in Canada; Abby Hughes states that "Canada's unemployment rate reached nearly its highest point since 2016 as the economy shed 66,000 jobs" in August of 2025 alone. This sense of scarcity creates panic among people, who feel they could easily be replaced at their jobs if they are not productive enough, and then be unable to find other employment. In these circumstances, it is easy to view AI's capacity for constant generative output as the solution.
Furthermore, in specific fields such as computer science, the use of AI is becoming almost mandatory ("The Role of AI"). If one's coworkers are using the technology to produce results more efficiently, a failure to match or exceed this AI-assisted output could cost a worker their job. As job security is now a privilege, boycotting the technology in these fields is not an option. In relation to the job crisis, it is worth noting that young people are the ones mainly affected (Hughes) and are also the core age group using AI, as over 45% of ChatGPT users are under 25 (Duarte). I am not claiming the link between unemployment and increased AI use among young people is causal, but the stresses caused by unemployment may explain why young people are turning to this technology as a way to cope.
Exacerbating this stress is the housing crisis Canada is currently experiencing, caused by the "financialization of housing [which] treats homes as assets rather than necessities" and prevents lower-income Canadians from entering the housing market ("Homelessness and Housing in Canada"). This rightfully creates a sense of panic among Canadian citizens, enhancing the existing fear of losing one's job and being unable to find alternative employment. While I am not claiming there is a direct link between the housing crisis and the use of AI, the pressures of the housing crisis, in combination with the other financial issues outlined above, exacerbate feelings of hopelessness, which further encourage people to look for ways to keep up with the constant demand of capitalism. Of course, this insatiable demand cannot be kept up with regardless of the tools we have, meaning that AI is not an actual solution, but merely the illusion of one, used by wealthy elites to reinforce their conditions of power. In this way, turning to AI as a solution is an example of "the folly of pursuing systemic change through the apparatuses that guarantee submission to the givens and rules imposed by those in power" described by Crary (14); it is a way for those who reinforce and benefit from the capitalist system to sell a solution to capitalist-made problems. By offering the illusion of a solution that works within imposed systems of power, the elites discourage people from finding an actual solution through more radical change.
Beyond this, people who are aware of the environmental impacts of using generative AI excuse their behaviour through the sense of powerlessness that comes with an increased awareness of the destruction of the planet. The rising threat of global warming has prompted increased discussion of the issue in the media and attempts to spread public awareness in various ways. The dangers of climate change are taught to children within the education system, reported on by news outlets, and explored through narratives in popular media, creating a rising sense of environmental doom in everyone. While these discussions are important for increasing public awareness, they often place responsibility onto mega-corporations for their harmful actions, as a main cause of environmental destruction is indeed large, wealthy corporations and their irresponsible practices. It is common to see articles emphasizing this, such as one by The Guardian which states that "[j]ust 100 companies have been the source of more than 70% of the world's greenhouse gas emissions since 1988, according to a new report" (Riley). While it is important to hold corporations accountable for their destructive actions in this way, this framing also creates a sense of doom and helplessness in individuals, as one person cannot make enough change to create a lasting impact if corporations do not do the same. This, along with the increasing individualism and narcissistic apathy that neoliberalism encourages, makes environmental action feel like an impossibility.
This narrative of powerlessness is encouraged as a way to assuage feelings of guilt among individual people, but it is often taken too far. In one article, Elizabeth Oldfield states, "[c]orporations must take accountability for their large part in the climate change crisis. They are responsible for the state our planet is in, and until they acknowledge that, no real change can be made." This mindset takes the responsibility to act in environmentally conscious ways off individuals and positions people as powerless against the looming threat. With this spreading mindset, people can easily justify their use of AI: they are not the ones at fault for global warming and cannot do anything to change it, so the small harms they cause through the use of AI are a drop in the bucket of wider destruction. It is easy to ignore the fact that AI data centers were "the 11th largest electricity consumer in the world" in 2023 and are expected to become the 5th largest energy consumer in 2026 (Zewe), or that they currently consume almost "six times more water than Denmark," especially if you are not part of the quarter of humanity lacking clean drinking water ("AI Has an Environmental Problem"). It does not help that Donald Trump has reversed many environmental regulations through his "'drill, baby drill'" agenda (Igini). As Klein and Taylor say, "a great many people understandably feel unable to protect themselves from the disintegration that surrounds them." If the man leading perhaps the world's most influential country is actively working to destroy the planet as we know it, and AI is presented as a shield against the difficulties of life under late-stage capitalism, it is easy for people to justify and overlook the individual harm they are causing.
In summation, the future seems bleak. The rise of generative AI use is linked to cognitive atrophy, the dissolution of community ties and social responsibility, and disastrous impacts on the environment. Mega-corporations and billionaire elites fund its development and promote its use because they thrive on these harms, which reinforce the social structures that uphold their power. In order to acknowledge the plethora of harms AI is causing, of which there are far more than I could touch on in this paper, it is crucial to understand that the use of AI is encouraged on a societal level, reinforced by the stress of late-stage capitalist life and the rising cost of living, which eclipse the environmental destruction most people are taught they cannot address individually anyway. Our situation calls for radical change; we can continue to call for the ethical development and regulation of AI, but there unfortunately seem to be almost no ethical uses of AI, and any such calls will be ignored by those with enough wealth and power to implement ethical development. On an individual level, it is worth considering boycotting this technology in order to protect ourselves from social control. AI's complete restriction seems unrealistically radical; as Crary says, "any suggestion that a livable planet would necessitate a radical remaking of our lives, and a refusal of the products and services that drive the growth and wealth of mega-corporations, is unacceptable" (84). While radical collective action against the technology seems difficult, if not impossible, there are steps that can be taken. The first is to recognize who benefits from AI use and demand accountability; the second is to collectively fight the techno-capitalist system that supports its use, whether through mass protest, boycott movements, or even more radical action.
Works Cited
“AI Has an Environmental Problem. Here’s What the World Can Do About That.” UN Environment Programme, 13 Nov. 2025, https://www.unep.org/news-and-stories/story/ai-has-environmental-problem-heres-what-world-can-do-about-that.
Chow, Andrew R. “ChatGPT May be Eroding Critical Thinking Skills, According to a New MIT Study.” Time, 23 Jun. 2025, https://time.com/7295195/ai-chatgpt-google-learning-school/.
Crary, Jonathan. Scorched Earth: Beyond the Digital Age to a Post-Capitalist World. Verso, 2022.
Duarte, Fabio. “Number of ChatGPT Users (November 2025).” Exploding Topics, 31 Oct. 2025, https://explodingtopics.com/blog/chatgpt-users.
“Homelessness and Housing in Canada: A Human Rights Crisis.” Tamarack Institute, 16 Oct. 2025, https://www.tamarackcommunity.ca/articles/homelessness-and-housing-in-canada-a-human-rights-crisis.
Hughes, Abby. “Canadian Economy Bled 66,000 Jobs in August as Unemployment Rate at its Highest Since ‘Pandemic Days.’” CBC News, 5 Sept. 2025, https://www.cbc.ca/news/business/canadian-economy-bled-66-000-jobs-in-august-as-unemployment-rate-at-its-highest-since-pandemic-days-1.7625918.
Igini, Martina. “36 Fossil Fuel Giants Responsible For Half of World’s CO2 Emissions: Report.” Earth.org, 11 Mar. 2025, https://earth.org/36-fossil-fuel-giants-responsible-for-half-of-worlds-co2-emissions-report/.
Klein, Naomi, and Astra Taylor. “The Rise of End Times Fascism.” The Guardian, 13 Apr. 2025, https://www.theguardian.com/us-news/ng-interactive/2025/apr/13/end-times-fascism-far-right-trump-musk.
Kumar, Naveen. “71 AI in Education Statistics 2025 – Global Trends.” DemandSage, 4 Nov. 2025, https://www.demandsage.com/ai-in-education-statistics/.
Mayer, Hannah, et al. “Superagency in the Workplace: Empowering People to Unlock AI’s Full Potential.” McKinsey & Company, 28 Jan. 2025, https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work.
Nguyen, Thinh. “Rising Cost of Living Pushing More Atlantic Canadians Into Debt Cycle, Experts Say.” CBC News, 8 Nov. 2025, https://www.cbc.ca/news/canada/prince-edward-island/pei-affordability-credit-debt-9.6964313.
Oldfield, Elizabeth. “Corporations vs. Consumers: Who is Really to Blame for Climate Change?” The University of Manchester, 7 Jul. 2022, https://sites.manchester.ac.uk/global-social-challenges/2022/07/07/corporations-vs-consumers-who-is-really-to-blame-for-climate-change/.
Riley, Tess. “Just 100 Companies Responsible for 71% of Global Emissions, Study Says.” The Guardian, 10 Jul. 2017, https://www.theguardian.com/sustainable-business/2017/jul/10/100-fossil-fuel-companies-investors-responsible-71-global-emissions-cdp-study-climate-change.
Roxin, Ioan. “Generative AI: the Risk of Cognitive Atrophy.” Polytechnique Insights, 3 Jul. 2024, https://www.polytechnique-insights.com/en/columns/neuroscience/generative-ai-the-risk-of-cognitive-atrophy/.
Shrivastava, Rashi. “AI 50.” Forbes, 10 Apr. 2025, https://www.forbes.com/lists/ai50/.
Singh, Shubham. “ChatGPT Users Stats (December 2025) – Growth & Usage Data.” DemandSage, 20 Nov. 2025, https://www.demandsage.com/chatgpt-statistics/.
Smith, Shaina. Lack of Awareness of the Environmental Impacts of Artificial Intelligence. Thesis, Queen’s University, 2024, https://www.proquest.com/docview/3161888918.
Stambor, Zak. “The US Faces a Deepening Cost-of-Living Crisis That is Unlikely to Ease Anytime Soon.” eMarketer, 1 Oct. 2025, https://www.emarketer.com/content/us-cost-of-living-crisis-wages-expenses-retail-impact.
“The Role of AI in the Future of Computer Science.” Monash University, Feb. 2025, https://online.monash.edu/news/role-of-ai-in-future-of-computer-science/.
Vallier, Kevin. “Neoliberalism.” Stanford Encyclopedia of Philosophy Archive, 9 Jun. 2021, https://plato.stanford.edu/archives/win2022/entries/neoliberalism/.
Vilcarino, Jennifer, and Lauraine Langreo. “Rising Use of AI in Schools Comes with Big Downsides for Students.” Education Week, 8 Oct. 2025, https://www.edweek.org/technology/rising-use-of-ai-in-schools-comes-with-big-downsides-for-students/2025/10.
Walton, Nigel, and Bhabani Shankar Nayak. “Rethinking of Marxist Perspectives on Big Data, Artificial Intelligence (AI) and Capitalist Economic Development.” Technological Forecasting and Social Change, vol. 166, 2021, pp. 1-8, https://doi.org/10.1016/j.techfore.2021.120576.
Zewe, Adam. “Explained: Generative AI’s Environmental Impact.” MIT News, 17 Jan. 2025, https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117.
Zirar, Araz, et al. “Worker and Workplace Artificial Intelligence (AI) Coexistence: Emerging Themes and Research Agenda.” Technovation, vol. 124, 2023, pp. 1-17, https://doi.org/10.1016/j.technovation.2023.102747.
Read more at Jillian Vandervoort.