Varn Vlog

Probing the Ethical and Economic Implications of LLMs with Nico Villarreal

C. Derick Varn Season 1 Episode 204

Are machines truly the future of our societal structure, or could they prove to be a catalyst for complexity and class struggle? Join me and Nico Villarreal as we challenge the conventional narrative and shed light on the intersections of artificial intelligence, universal machines, and capitalism, inspired by Nico's incisive articles, "Artificial Intelligence, Universal Machines, and the Killing of Bourgeois Dreams" and "Against the Butlerian Jihad."

We scrutinize the oft-touted apocalyptic predictions spun by industry insiders, questioning whether this fear-mongering is merely a self-aggrandizing tactic. From there, we pivot to delve into the intricacies of Large Language Models (LLMs) and their potential impact on value creation and the capitalist production process. Our dialogue doesn't stop at the economic implications of AI; we also explore the ethical terrain of creating artificial subjects and speculate on their treatment under capitalism.

Together, Nico and I delve further into the complexities of transitioning from physical to electronic mediums, the role of cultural evolution in human experience, and how AI could transform social dynamics. We go beyond the rhetoric to bring you an enlightening exploration of the impact of AI on technology, economy, and ethics. So buckle up and join us on this thought-provoking journey as we decode the enigma that is AI and its potential influence on our world.


The two other articles mentioned:
Ian Wright's "Why Machines Don't Create Value"

Simon McNeil's "Toward the Butlerian Jihad"

Music by Bitterlake, used with permission; all rights to Bitterlake

Support the show


Crew:
Host: C. Derick Varn
Intro and Outro Music by Bitter Lake.
Intro Video Design: Jason Myles
Art Design: Corn and C. Derick Varn

Links and Social Media:
Twitter: @varnvlog
Bluesky: @varnvlog.bsky.social
You can find the additional streams on YouTube

Current Patreon at the Sponsor Tier: Jordan Sheldon, Mark J. Matthews, Lindsay Kimbrough, RedWolf, DRV, Kenneth McKee, JY Chan, Matthew Monahan, Parzival, Adriel Mixon

C Derick Varn:

Hello and welcome to Varn Vlog. I am here with Nico Villarreal, friend of the show and maybe the most repeated guest. You're up there; it's an honor. You have at least five public appearances and like two behind-the-paywall appearances, so it's a lot. Well, excuse me, you're the most repeated guest that I haven't officially given a sub-show and a name, because that's what I tend to do when somebody comes on a lot.

C Derick Varn:

We are talking about your May 23 article, "Artificial Intelligence, Universal Machines and the Killing of Bourgeois Dreams," and I found this article in particular useful.

C Derick Varn:

And the reason why I found it useful is that I tend to be an LLM skeptic, but not for the reasons of someone who's even more skeptical of tech than me, like, say, Dwayne Monroe, who is very knowledgeable, understands the industry, understands a lot of the deliberate and self-imposed ideological misunderstandings of the internet industries, but I think maybe does not give that much credence to LLMs and to so-called artificial intelligence in general. I will admit that when I hear AI I get all antsy, because I'm like, well, no one even knows what an intelligence is, but I don't actually think that's true. But I also don't think it's not true. So when I first read your article I thought, oh, this is going to be some techno-optimistic stuff, and I have a deep-seated suspicion of techno-optimism, not because I'm a primitivist or anything like that, but because usually technology and its artifact get confused, and we can make the techne/artifact distinction. So ChatGPT is the artifact, the LLM is the techne, and it's a sub-techne of learning models, machine learning in general.

Nico Villarreal:

And for those who don't know, LLM stands for large language model.

C Derick Varn:

Right, and I think you do a pretty good job of illustrating why we should not be totally afraid of this technology, but we do have to understand that it exists in a capitalist context and has a capitalist telos. I think it was actually you and I who agreed about this immediately when ChatGPT dropped, and I was like: is it any surprise that a worker shortage, and a desire to reduce the negotiating power of tech workers as debt gets more expensive, would lead to people finally coalescing all these things they've been working on for a long time and releasing them right now? Because I don't think that's an accident. I also don't think it's a conspiracy. I think the pressures were there to get this kind of technology out, to do something that is not quite creative but really, really complexly automated.

Nico Villarreal:

I mean, from the perspective of these firms like OpenAI and a lot of others, this is a bigger structural thing, sure, but when interest rates are going up, all companies face put up or shut up. You actually have to see some kind of returns from these R&D projects you've been working on so long, because you can't be so speculative with investments anymore.

C Derick Varn:

Right. As a side note, and this comes up in a separate discussion with the Regrettable Century people on my sub-podcast, we talk about how techno-feudalism is bullshit, because the amount of R&D in technology doesn't make sense if all capital is doing is sitting on stabilized rents. Yeah, you know.

Nico Villarreal:

I'm not a scholar, but this is why I started the article with those quotes from the Communist Manifesto. There's the big one, you know, that all history is the history of class struggle, but the other big one is that the bourgeoisie are always driven to revolutionize the means of production, or the instruments of production I think it was. And that is still true, even if a lot of these new innovations are no longer socially revolutionary. They're revolutionizing production and the relations of production to keep things as much the same as possible in terms of bourgeois society.

C Derick Varn:

Right. One of the things that I think you also pick up on, where you agree with Dwayne Monroe, is that a lot of the apocalyptic fear-mongering about AI from the industry itself is a form of valorization, stabilization, self-justification, both as a limit to what they might do, but also as: well, if you limit it, we're also admitting it's super powerful, so we can use it in the fields where you don't limit it, and by calling attention to it in this way, we can hedge it into the places we want it. Which leads to something I think people miss in the calls for the curtailing of AI: it's not often about workers' protections or anything like that. What people are battling over is which industries are going to benefit the most from this kind of technology. And the other thing I think you're right about, and you talk about it a little bit in this article, is that how this is being sold to us in almost all instances is misleading. But we should not take that misleadingness to mean that this is not truly revolutionary in terms of production, or that it's not useful, and I found that interesting. One thing I found interesting, just as a side note: when they gave ChatGPT all the standardized tests, the one you would think it was designed to do best on, which is writing, it actually performed subpar, but it beat out the average college student in every other category. Now, you get to a reason why, and we'll come back to that. I think there's a lot to be made of that quote that you put in from Marx, because there are some subtle things in there, so we'll come back to it.

C Derick Varn:

You actually argue with some of the AI proponents and fearmongers, like Robert Miles, who basically gets all worked up about human inconsistency and fuzziness being a problem, and this immediately got my anthropology hackles up, because I'm like, well, that's bullshit, because fuzziness is actually why we're so innovative.

C Derick Varn:

If you look at, say, chimpanzees and humans and mechanical logic, chimpanzees are actually better than humans. I know that's going to shock people, but if you give them a machine, teach them how to use it, and include unnecessary components in the process, chimps will figure that out and drop the unnecessary components. Humans don't. What humans do that is interesting is that they instead justify the irrational components in creative ways that end up being generative in and of themselves. So, from the standpoint of mechanical efficiency, the chimp actually should beat us, but our ability to come up with justifications for why things work is, in and of itself, creative. Likewise, this is why, as I've always said, we can talk about large language models as approximating human intelligence in some ways, but it's not a human intelligence, and that's fine. That doesn't mean it isn't an intelligence, though, and that's an important distinction.

Nico Villarreal:

This is true of AI people, and this is true of AI skeptics: they tend to be very reductive about intelligence in one way or another, where it can only be this very narrow thing generally associated with humans. But the thing that you're talking about, the capacities that humans are supposedly unique in, that is actually what the AI is pretty good at doing. It hasn't perfected it; they haven't gotten to the step of it being able to come up with its own concepts necessarily. But it can remix concepts really well, and that is a step towards creating new concepts and learning how to use new things in unique ways.

Nico Villarreal:

And I didn't do it in the article, but I'm pretty sure I've mentioned it in our previous discussions, how Žižek talks about evolutionary spandrels, right, about how something that was kind of superfluous in a process can get repurposed later, and this is part of how systems evolve and so on. When I first heard that quote from Rob Miles, because I was just watching YouTube videos and stumbled across this guy, who at the time was an AI researcher at the University of Nottingham or something like that, I knew it was some insanely pure ideology, but I couldn't really articulate it in a full way until now: this idea of what intelligence is, this really utilitarian kind of view that we have certain desires and, as we understand the world better, we are just choosing from different possible world states to get that desire. And it totally doesn't interrogate how desires evolve; it always takes them as fixed. It always takes it as given that they follow certain logical rules, like you wouldn't want your desires to change, or you would act against that happening.

Nico Villarreal:

And, to a certain extent, people are resistant in some ways to their values changing, but on the other side, there are ways they are actively trying to change themselves, to become better people all the time. Understanding how values change over time is a big part of dialectical materialism, for example: why are human cultural values what they are? I mean, you can also do an idealist version without the materialist part, but it's about how they change over time, and that is something so rarely talked about in the world of AI. Part of it is computer science's heritage from different mathematical branches, like economics. Some of this is directly taken from them because of the influence of game theory and other things, and there is some overlap in early economics, mathematics, and computation.

C Derick Varn:

I also think that even when they're philosophically sophisticated, they tend to be philosophically sophisticated in eliminationist or reductivist forms of analytic philosophy. So, you know, these are the kinds of people who argue that consciousness isn't a thing because neurology complicates it, therefore it's only a thing because we had a word for it, or something. And I'm like, yeah, no. But I think this actually gets to a Marxist point that's really important about historical materialism.

C Derick Varn:

I am a believer that Marx actually does think there is something like a steady-state human nature, and that it comes from evolution. But his big point, and he makes this point even with dogs, and I bring this example up all the time because it's a really good example of what Marx was intuiting: when we studied wolves, we studied them in captivity, and we assumed that their social context in the wild was the same as in captivity. So we talked about this basically prison hierarchy that they fall into as the natural state of wolves. And when we studied wolves in the wild, we realized, well, we're full of shit; that's not how it actually works. So whatever wolf nature is, it is mediated by a social context, and you can't know pure wolf nature, and likewise you can't know pure human nature. This is not to say that there isn't one, but to me, there's a lot of Aristotelianism running through Hegel and Marx, and this is the point where Marx departs. He's like: we don't have a teleological function that is universal and transhistorical.

Nico Villarreal:

The closest thing he comes to, and I touch on this in the essay, is just talking about creativity.

Nico Villarreal:

Creativity and labor. And this is important because it relates to the techno-optimist claim that I did make in the article, and that I sincerely hold, which is that the thing AI has the potential to give us, for the first time in human history, is the ability to directly shape the values of a subject, to shape what a subject is, directly. Whenever we shape other people, it is always indirect; we are mediated through various things, language and how they perceive and all those kinds of things. That mediation goes away when we are shaping AI subjects, and that, I think, is the ultimate possible creative act that humanity can achieve.

C Derick Varn:

I mean, one of the interesting things about language models, and we do talk about this, is that I think they learn. One of the reasons I think they learn is that if you train a language model and you feed it a bunch of math, it's going to be really good at math. But if you start changing what you feed it in its large language modeling, it will actually start getting bad at math. So it has developmental pathways that are more similar to mammalian learning than, say, a rote calculator or prior forms of numerical automation.

Nico Villarreal:

So there are two interesting things here about learning with LLMs, because there are basically two stages of their existence, right? There's the stage when you're actually training them with data, and then there's the stage we're all familiar with when we talk with ChatGPT, which is interacting with the model that's already been trained; we're interacting with the already-optimized set of neural net weights. And that thing can still kind of learn, in a way, because when you present it with specific information and data in the context window, in the space where you're communicating with it, if you put in stuff that is associated with good math, good reasoning, and correct answers, you'll get better math out of it.
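To make that concrete, here is a minimal sketch of in-context (few-shot) prompting, the frozen-weights "learning" Nico describes; `generate` is a hypothetical stand-in for whatever text-generation API you have, not a real library call.

```python
# A minimal sketch of in-context learning: the model's weights are frozen,
# but what you put in the context window conditions what comes out.
# `generate` is a hypothetical stand-in for any chat/completion API.

FEW_SHOT_MATH = """Q: What is 17 * 24?
A: 17 * 24 = 408.

Q: What is 132 / 6?
A: 132 / 6 = 22.

Q: {question}
A:"""

def ask_math(question: str, generate) -> str:
    # Prefixing correct worked examples tends to elicit better arithmetic
    # than asking the bare question, with no retraining involved.
    return generate(FEW_SHOT_MATH.format(question=question))
```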

Nico Villarreal:

But the training side is also important, and they're discovering it's much more important than they previously thought, because the big new thing in LLMs is realizing that you shouldn't just throw the whole internet into a box and learn everything there is to learn, because some of the internet is stupid. If you discriminate better about what stuff you give it, if you just give it the good part, you get a more intelligent LLM. That's the big new thing.

C Derick Varn:

And that approximates human learning. So it's not just that it aggregates everything there actually is; it can be aimed at certain kinds of aggregation, and that has effects downstream on what it can do. This actually leads me to, and I keep saying we'll get back to stuff, some of the places where you go against a lot of these vulgar AI fears and this bourgeois utopian stuff. But one of the things you do say in this piece is an interesting argument, and one I had not thought of before. I am a proponent, and always have been, of the view that machines do not create value because they're fixed capital, just like Ian Wright says, and you mention him in your article. That's a general Marxist proposition.

C Derick Varn:

MMTers, and particularly someone like Steve Keen, have always balked at this. This is his big argument against Marx, and when he stated it to me personally in an interview around 2014, I realized that he didn't understand what we meant by value. He's like, well, you can't say that it doesn't have an effect on the production process, and you can make more profits. And I'm like, yeah, you can make more profits, but it's a fixed form of capital, and that's not how you're making profits; it's not because the machine is adding value. Otherwise you could have a fully automated economy, and you can imagine how that would fall apart. The reason why machines help profits is that they reduce socially necessary labor time. However, with LLMs there is the possibility, we haven't gotten there yet, but there is the possibility, of a self-correcting subject, which would move it from a fixed form of capital to a variable form of capital, and thus it would create value in some sense.

Nico Villarreal:

Right. It's important, I think, that in order for it to create value, it doesn't just need to do that; it also needs the other capabilities that humans have to adjust the production process, which includes physical manipulation. So we still need advances in robotics to really get there, but this is a big part of it, and LLM agents are still in development. There's a really interesting paper that came out recently where they created an agent using an LLM to explore Minecraft, which is a really open-ended world, you know, and it did it in a very human-like way. It did this not just by telling the character where to go and such; it created code that, when executed, did specific tasks. So it learned how to create new skills in this environment and deployed them systematically to accomplish certain goals. And if you can apply that same kind of thing to the real world, that's revolutionary, a machine that can do that.

C Derick Varn:

Right, and so two questions arise. Is this actually energy efficient? That was my initial critique, because a lot of the internet's energy cost is actually not factored into the way people talk about this. But apparently, from stuff I'm reading, these LLMs do not actually require that much compute to work, which means they don't actually require that much energy compared to other functions on the internet. Still not nothing.

Nico Villarreal:

Well, I mean, here's the thing: it depends on what you're comparing.

C Derick Varn:

They require dead labor, obviously, and thus the prior energy expenditures embodied in that dead labor.

Nico Villarreal:

Well, they still need it, because here's what LLMs are doing when they're actually being used: they're doing a lot of matrix multiplication, because they have this big set of values, the different weights in the neural net, and you have to work out what output you get for a specific input. At the top level, the cutting-edge stuff, it is very energy intensive. And I've been experimenting on my local machine with some of the smaller, more lightweight LLMs, which are still difficult to run; you need to have your GPU running. It is kind of like crypto, or running an intensive game on your PC, so it's not extremely energy efficient. In terms of what you compare it to: if you compare talking with an LLM like ChatGPT or GPT-4 to Google search, searching a database is much more energy efficient than talking to an LLM. But at the same time, this technology is still very early in development.
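As a rough illustration of the matrix multiplication Nico is pointing to, here is a toy feed-forward layer; the dimensions and the single-layer setup are assumptions for illustration, not any particular model's architecture.

```python
import numpy as np

# Toy feed-forward layer: LLM inference is dominated by pushing activations
# through big fixed weight matrices like W. Real models are thousands of
# units wide and dozens of layers deep, repeated for every generated token.
rng = np.random.default_rng(0)
W = rng.standard_normal((4096, 4096))  # frozen weights learned in training
x = rng.standard_normal(4096)          # activation vector for one token

h = np.maximum(W @ x, 0.0)             # matrix multiply plus ReLU nonlinearity

# Roughly 16.7 million multiply-adds for this one layer alone, which is why
# generation keeps a GPU busy in a way a database lookup does not.
print(h.shape)
```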

Nico Villarreal:

People are realizing ways they can cut these things down. It used to be, and this is a big change in the industry, that all you were doing was making these things bigger, which meant they cost more both to train, and training them is a lot more expensive than just running them, and to run. But now they're realizing there are other routes. One is the data-quality thing I mentioned before. The other is that there are efficiencies you can get from cutting out junk: you realize you don't need all of these values, you can prune them, you can simplify the mass of it, and there are other things you can do to make these models simpler and more compact. That's going to make the cost of running LLMs a lot cheaper over the next few years.

Nico Villarreal:

I don't know if you know this, but if you go to a repository of LLMs on the internet, you'll usually see a number next to whatever model you're looking at, and it'll be like 7 billion or 13 billion or 70 billion or something like that, and that number tells you how many parameters, the dimensionality of the weights, the model has; how big it is, basically. And where I'm going with that: in 2021, 2022, the biggest one that came out was something from NVIDIA's R&D team that had like 514 billion parameters, which is insane, the biggest you could get, and it's totally outdated now. There are models that run at like 60 billion parameters that are way better than that.

Nico Villarreal:

The exponential curve of just making models bigger, and that making them better, is done. People aren't doing that anymore. And that was what was really fueling a lot of the fears, something like that law about computation becoming cheaper exponentially, Moore's law. People are realizing, no, these LLMs are not just going to keep getting exponentially bigger and better until they surpass humanity. That's not a thing that's happening.

C Derick Varn:

Yeah, there's no great singularity. Well, I think that's interesting, because basically what you're telling me is that my initial concern there is not inaccurate, but that there are countertendencies in the development of the technology, which means it's not going to be purely an energy suck, unlike cryptocurrency, which only gets less efficient over time by design.

Nico Villarreal:

Well, there are debates about that. Crypto has certain things that work like that; the big one, Bitcoin, does work like that, right, but not all of them do.

C Derick Varn:

Yeah, but Bitcoin was designed to mimic gold. It has deliberate scarcity models built into it, although, in general, all coin mining seems to be getting more difficult while the value is no longer going up. So that's an interesting problem.

Nico Villarreal:

Well, good news for the rest of us who'd like cheaper GPUs.

C Derick Varn:

Yeah. So one of the things this has to deal with is that energy barrier. Another thing: when we say that a machine moves from fixed capital to variable capital, we mean that it has the capacity to act like a sentient being, a human or at least a primate, that has some conception of how to alter its tools and has some kind of response. Humans are obviously the best at this, but, unlike Marx, I do not think this is a uniquely human capacity. I think it might be a uniquely primate capacity, or a mammalian capacity, but nonetheless.

C Derick Varn:

Your point that they need to be embodied is actually crucial: they do have to act like humans in the sense that they must exist in a physical world and have energy inputs and so on. So they still have fixed-capital elements, just as there is a fixed-capital element in human beings, in that we have to be able to socially reproduce ourselves or everything falls apart. So I think that's a pretty vital point. But from the Marxist perspective, it also raises some ethical questions. It's like: okay, once something is that sentient, is it not effectively a person?

Nico Villarreal:

Well, here's the big problem I tackle in the article. Well, actually, there's one thing Ian Wright pointed out that I hadn't thought about before, and I included it in the article: unlike humans, these machines will be made obsolete by future machines that can do whatever they do better or more efficiently. And, you know, maybe humans will become obsolete in certain ways. But the other thing is the ethical question: how do you treat something like that? Obviously there's going to be lots of debate about that, but there are necessary differences between them and a human, right? For the reason I mentioned before: unlike humans, they can be directly shaped into certain things. Humans require mediation to be changed in terms of who you are as a person. That's not the case for this kind of artificial intelligence.

C Derick Varn:

Right, and that is different. That actually is not just different from humans; it's different from all biological life.

Nico Villarreal:

Yes, as far as our technology goes. Maybe sometime in the far future biology will get to the point where you can do that without mediation, but that's kind of scarier as well. Yeah, it actually is terrifying.

C Derick Varn:

But I think this is the crux of what people are afraid of, and I, like you, think that's a legitimate concern. But the AI apocalypse people have this idea that these things are automatically going to be aggressive. I'm only concerned if we let LLMs and other kinds of AI be developed solely by the defense industry. If that's the case, then yes, I'm worried. But in general, I think there's enough other development that we should not assume that an intelligence like this would necessarily be aggressive toward other forms of life, particularly other forms of life that it may even need for its energy source.

Nico Villarreal:

So there is a kernel of rationality to why a lot of AI people thought that, which is that they were mostly going off of how reinforcement learning works. It is also why they thought that thing about utility functions having to be coherent, and incoherency not being a feature of good intelligence.

C Derick Varn:

This is actually something that goes all the way back to classical cybernetics and its belief that intelligence is basically a behaviorist interpretation of the cybernetic feedback effect, which I think there are cybernetic and post-cybernetic answers to. But when people try to resurrect Stafford Beer and Norbert Wiener without dealing with that, I'm like: that's not really what intelligence is, not by itself. It may be part of what leads to intelligence; I'm not saying the feedback effect isn't part of it.

Nico Villarreal:

There's actually a great paper by Ian Wright, kind of related to this, on what that really is, because it's not intelligence, but it is what he calls a kind of intentional semantics: a feedback-loop system has a meaning, according to itself, of what it's trying to do, but it isn't necessarily intelligent. And capital is that way too; it's these feedback loops that have intentionality behind them, and there's a kind of semantic meaning to that, but there isn't an actual intelligence. Which is why I call basically all of these classes of complex systems stupid: they don't have that linguistic capacity we were talking about earlier. They don't have the ability to think consciously about things, to come up with new ways of thinking about things.

C Derick Varn:

Yeah, the semantic programming does not have the ability to be, at least as of yet, metacognitive. It can't really map its own mind and then make that fuzzy.

Nico Villarreal:

So, the thing with reinforcement learning: everything that was AI for the twenty years up until 2016 or so was reinforcement learning. Well, there were a couple of different things. For some of the things where they beat board games and such, they would just tell the AI the rules and then run it through everything possible. But eventually, what became really big was running a simulation where an algorithm tries all these different things and gets rewarded for doing the thing that we want. And because of this kind of system, which everybody assumed would be the future, people thought that if you just give it a hyper-specific goal and tell it to do that, then it's always going to do things we don't want, because it'll just be obsessed with the appearance of that thing. And this is true: reinforcement learning models have fundamental limitations, such that once they go out of a specific context, they will not work anymore, because the thing you told them to do no longer looks like what it looked like before.
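For readers who want the mechanics, here is a minimal sketch of the reward loop being described, tabular Q-learning on a made-up toy corridor; the environment, rewards, and parameters are all illustrative assumptions.

```python
import random

# Tabular Q-learning on a 5-state corridor: the agent is rewarded only for
# reaching the rightmost state. This is the "try things, get rewarded for
# the thing we want" loop; note the learned policy is welded to this exact
# environment and reward, which is the brittleness discussed above.
N_STATES = 5
ACTIONS = (-1, +1)                 # step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.3  # learning rate, discount, exploration

for episode in range(2000):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0   # reward only at the goal
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# The greedy policy ends up pointing right everywhere, for this task only.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)})
```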

Nico Villarreal:

Which is why LLMs are so important. A lot of them are trained with reinforcement learning, but not only that: ChatGPT is first trained with a regular optimization algorithm on language, and afterwards they take that and do reinforcement learning on it to tune it more toward what they want it to be.

C Derick Varn:

We do that with people too. It's not like we don't have reinforcement learning. Pavlovian response works on us. It just doesn't really explain our intelligence.

Nico Villarreal:

Exactly. But what explains the intelligence is the actual data it has, because all these things that we think of as intelligent, almost all of them are encoded in our culture. We discovered ways to be intelligent over time, through talking and learning and doing things in the world and then remembering it.

C Derick Varn:

This is a valid point. In, I think it's the four dimensions of evolution, or whatever, I forget the book, it talks about epigenetics but also about why culture is important to human evolution, and why it probably slows down our physical evolution, because we're literally outsourcing elements of what we used to have to do internally to the collective. It's kind of like what SETI used to do on your computer: we're going to outsource a chunk of all the processing we need to that screensaver you downloaded, and we're going to use it to explore space.

Nico Villarreal:

That was a real thing? I didn't know that. That's like the plot line from Digimon.

C Derick Varn:

That's awesome. Right, and basically what cultural knowledge is, is a form of that. But because of that post-hoc justification thing that we talked about, it also generates all kinds of weird shit unique to humans: ideological and religious explanations, irrational explanations, pareidolia, you know, the ability to see patterns as more meaningful than they are. That's a human thing, and I think it might be a uniquely human thing. I don't know, for example, that we have evidence to say orangutans or chimpanzees or bonobos have a whole lot of pareidolia, even when they learn language; when they're speaking to us in sign language, we don't have the evidence for that. Now, I could be wrong. Maybe Frans de Waal has done something that I haven't seen, but there could be.

Nico Villarreal:

I mean, I think some scientists hold that certain species like dolphins and elephants have very complex communication, so there might be. It'd be a hard thing to test, really.

C Derick Varn:

Well, even corvids. With corvids, for example, if you put birds upside down in a field, they won't land, because they assume the food is poisoned. And how do they know that? It does seem to be some kind of social learning. But in general, I don't have a lot of evidence that there's, like, a monkey god, that they need one as an explanation for the social system they're engaged in, with them backfilling explanatory rationales. And this goes way back: this is one of my arguments with the New Atheists, back when I was closer to them in the aughts. What I would point out to them is that I think religion actually is not the root of all these problems; it's literally a useful epiphenomenon of what is at the root of human creativity, and if you take care of the other problems, the problems with it will go away.

Nico Villarreal:

It's interesting. There certainly are concrete distinctions here in the animal kingdom, but what we're discovering is that they're not quite as firm on our side as we thought they were. There was a recent study suggesting that humans might have been burying our dead and carving symbols way before we thought, when we were much more like other primates.

C Derick Varn:

Right, oh yeah, we've got evidence now going way back. I think there's evidence that even pre-Homo sapiens were doing that.

Nico Villarreal:

Yeah, that's what they were talking about. And the thing is, we've talked about this problem of humans thinking we're special and such. It's exacerbated by the fact that no other hominid species survives, as far as we know. I've heard rumors that on some Polynesian or Indonesian islands...

C Derick Varn:

There might be a couple left that they're investigating, but, you know, it would be interesting if there are still some hominin primates that aren't Homo sapiens left. One of the other assumptions, I think from anthropology, behind the idea that AI would try to kill us is that we assumed we did that to the Neanderthals and the Denisovans, and the evidence is actually no, we interbred with them. I mean, we outcompeted them, but their DNA is all up in ours.

Nico Villarreal:

So, yeah, the thing I just remembered and wanted to bring up: you remember in the Žižek-Peterson debate how Žižek pointed out, and it's a great line, I think about it all the time, that lobsters might have hierarchy, but they don't have authority? That's certainly true of lobsters. But it's entirely possible that some animals besides humans do have authority, and you can't dispute the possibility either way.

C Derick Varn:

Gorillas and orangutans might, because they definitely have pretty clear and advanced hierarchies. And there's the great debate about whether humans are naturally hierarchical or egalitarian: if you look at our great ape relatives, you've got examples of both, so there's no comparative biological conclusion to be made, and in some species, like chimps, you have examples of both in the same species, which is interesting. It's like the relatively recent discovery, I think in the last 30 years, that whales have culture. That's another thing we didn't know until recently, and I think we learned it from the fact that different whale groups sleep differently, even when they're together.

Nico Villarreal:

Here's the thing about human culture that makes it what it is and separates it from a lot of these other animal cultures, the thing people don't think about very often: we had the power to ingrain our culture, the information in it, into physical reality outside of us. Our dexterity is so important to the creation of civilization and culture with the level of richness that it has.

C Derick Varn:

Basically, if we don't have thumbs, we don't have a great amount of manual dexterity, and if we don't have our particular laryngeal setup, because, you know, we can teach apes language, but we can't teach them to talk, then you literally don't have the ability for our intelligence to be as social as it is. And this is actually an insight from Marx that I think is quite anthropologically astute: from the standpoint of evolution, we're pretty shitty as individuals. We are only really impressive in tribes and other human bands. We're even weak compared to our nearest cousins: even when we're in prime physical condition, a chimp can still rip our face off, and we cannot do it back. But we can communicate enough to hack their face off if we need to, by innovating on prior designs. And again, we do see this kind of process even with things like corvids, but we don't see it with the same extent and rapidity that we do in humans.

C Derick Varn:

Humans figured this stuff out very quickly. You know, what I always find interesting about human technology, and it brings us back to ChatGPT, is that it is both convergent, so there seems to be a point where, once we hit a certain thing, we're all going to do certain things, but also how we utilize that convergence is entirely specific to our particular socioeconomic mode and social arrangements. The primary example is all the cool shit that China, Korea, and Japan invented between roughly 3,000 and 1,500 years ago, which they did not utilize for anything that capitalist society would find useful.

Nico Villarreal:

Well, this is actually something I've been thinking a lot about recently, in very modern terms, about how innovation works, because where and when investments are made matters so much for what technologies are used. You can see this in the baseline, fundamental supply chains around the world, in what mining technologies are used, refining, manufacturing. These are all determined by things like: oh, there's this patent made in Italy, or in Canada, or something, and that determines how the Americans do manufacturing versus the Chinese, versus all the rest of the world, the Israelis, the Brazilians. It is all these different genealogies of production that happened because of waves of innovation and waves of investment.

Nico Villarreal:

And we're probably going to see the same thing with these new AI technologies as well.

C Derick Varn:

This is the thing about Kondratiev waves, right? Whether or not you think they're explanatory, the way someone like Michael Roberts thinks they are, one thing I will say is that they do illustrate pretty clearly that innovation cycles and business cycles are intimately related. They are.

Nico Villarreal:

I've been trying, and it's still in development, running all these different models and simulations of investment, to come up with a more concrete way of thinking about long waves, because I think it's important for thinking about both capitalism and how civilizations happen over great time spans.

C Derick Varn:

Yeah, long-wave theory is interesting because it does give us a way to talk about non-capitalist societies. Marx hints at ways to do that but ultimately does not really try, right? Other than the end of feudalism into capitalism, he's not dealing with it. On the question of the emergence of feudalism, for example, he just takes bourgeois political economy at face value, which is kind of a problem. But Kondratiev waves, which, I like to point out, do have a Marxist core in their origination, although they've been moved far beyond that, give us a way to look at complexity cycles and to try to combine historical materialism with complexity science. There are people who will argue that this is already implicit in dialectics. I don't think it is; I used to flirt with that theory. But dialectics is not dealing with these levels of complication, nor is there an attempt to see internal cycles in a system in the way that a Kondratiev cycle would enable you to.

C Derick Varn:

And this is the other thing. This is why I take cliodynamics seriously, even though I think it comes out of kind of reactionary thinking: it gives us a way to look at these other patterns and to start looking at both broad-spectrum mode-of-production patterns and the patterns within modes of production and how they might work, and to talk about why we need to periodize capitalism, or, say, feudalism, whatever that is, or Islamicate development, et cetera. If you start looking at these cycles, we are able to do that. Now, when we throw machines into this mix, machines that are approaching something like what we might consider sentience, they're not there yet, and as I've talked about, I'm not sure we'll know when they're there, because I don't know of a test for sentience, period.

Nico Villarreal:

I don't know about a test, but I do lay out in the article some ways we could think about it, once they've developed ways of regulating an AI subject. And, well, I should clarify: you can even set it up so an AI isn't a subject at all, because being a subject requires acknowledging yourself as your own thing, and you can train it not to do that.

C Derick Varn:

Right. Well, there are animals that don't seem to recognize themselves as subjects; they don't recognize themselves in a mirror.

Nico Villarreal:

Right, right. But if you do have them as subjects, once you're able to effectively regulate what they accept as part of themselves and what they don't, and they have complex regulating systems that use the tools of intelligence they have, I think that will effectively be a kind of sentient, sapient creature. But before we leave the concept-of-cycles thing, I wanted to say what I've been working on, and this is kind of off topic, but I've been very excited about it. I've been thinking about how cultures, how societies, make investments, and what investment is, and what capital stocks are. Physical capital stocks are these things that we have to plan to revitalize periodically, like roads, buildings, machinery, all these things we have to plan ahead to fix later in order to continue to have our capabilities.

Nico Villarreal:

What I realized by modeling this, by creating different equations for depreciation and things like that, was that there are certain class relations that produce stability in the capital stock. Obviously, it's difficult to measure the capital stock in absolute terms, but you can measure it in terms of how much of society's resources are going towards it, and that's how I simplified all of this. When you think about things that way, you see that certain patterns emerge that tend towards stability. By stability I mean investment and depreciation equalize, and this is something that kept coming up in the simulation over and over again, and I was like, why does this happen? If class relations are basically changing at random, you'll get these two variables equalizing on average. If class relations, and I'm specifically talking about the rate of exploitation and the rate of capitalist consumption, are stable, then so are investment and depreciation. And if they're both moving in the right directions at the same time, then you'll have stability.

Nico Villarreal:

Which basically means that either investment or capitalist consumption is being made to increase while the other one is stable. There are all these different arrangements that produce stability. But industrial capitalism requires the system to be naturally out of this equilibrium. It can't be that the capital stock just equalizes in terms of investment and depreciation; it has to be that investment is greater than depreciation on average in order for industrial capitalism to be what it is, which is our whole experience of it, right? So there is always this teleology: eventually you're going to hit a limit of these class relations, where the capital stock will cease to increase as a share of the resources of society. And this is the conjecture I'm trying to work towards: that eventually all societies move back towards this equalization, and this is ultimately the source of long waves at different levels of abstraction.
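To make the equalization claim concrete, here is a minimal sketch of this kind of capital-stock simulation; the production function, parameters, and rates are illustrative assumptions, not Nico's actual model.

```python
# Toy capital-stock model: K is the capital stock, I investment, D
# depreciation. "Stability" in the sense above means I and D equalize,
# so K stops growing relative to society's resources. All functional
# forms and numbers here are assumptions for illustration.
delta = 0.05                  # depreciation rate
K = 1.0                       # initial capital stock
exploitation_rate = 0.4       # surplus as a share of output
capitalist_consumption = 0.5  # share of surplus consumed, not invested

for t in range(500):
    output = K ** 0.5                            # assumed production function
    surplus = exploitation_rate * output
    I = (1 - capitalist_consumption) * surplus   # unconsumed surplus is invested
    D = delta * K
    K += I - D                                   # K accumulates while I > D

# With these stable class relations, I and D converge (here both approach 0.8),
# and K settles at a fixed level instead of growing without limit.
print(f"K={K:.2f}, I={I:.3f}, D={D:.3f}")
```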

C Derick Varn:

That's also why regression and/or decadence is real. And I've pointed this out in the past: you know I don't love Tyler Cowen, but his thesis about secular stagnation I actually probably think is true, with the exception that we've been heavily investing in one domain where we do not see secular stagnation, and that's the domain we're talking about today.

Nico Villarreal:

But it relates back to it in a weird way, because the end goal has always been to create this universal, general machine. In Marx's Fragment on Machines, he talks about how capital moves to embrace labor as if possessed by love; it's a quote. Capital is trying to become us as much as possible, and in doing that, in trying to create this artificial universal machine that can do any kind of human economic task, this is the fulfillment of capitalist decadence and of the drive to create capitalism, not for capital as an abstract concept, but for capitalists as a class, to live out bourgeois relations forever, to consume forever. Because at that point fixed capital is basically superfluous in almost all situations: once machines can become living labor, there is nothing you can't eventually substitute these machines for. You no longer have to think about economizing processes. Well, you should still think about that, because there are physical limits to budgets in societies, but there will be a tendency not to think about it, because you can always just get more of these machines to do things in a general way. This will be the endpoint of the bourgeoisie; this is what they're working towards. It will also be a very dystopic reality for humans, and especially the working class, which I mention at the end of the article. The creation of these machines, and this is already becoming something people can imagine, will create this fantasy of a world where the working class is superfluous, and that opens up a whole new range of reactionary politics: we can just get rid of the surplus population we don't need; the bourgeoisie can exist as an aristocratic class without any kind of human excess below them, with true slaves to do whatever they want. Now, I don't think that's a practical thing that will happen; I think it is just a fantasy. But it will change our social reality. For the simple reason that there are physical budgets in economies, how much people rely on these machines will depend on the cost of their inputs relative to the cost of inputs for human workers, like food and shelter and those kinds of things. But these machines are going to be much more productive in lots of ways, and there will be a new tendency, and I mention this as a near-term thing, right now.

Nico Villarreal:

We're already seeing how AI is creating new tendencies towards immiseration. When we had the first waves of automation in industrialization, there were tendencies to counteract this, and reasons why Marx's immiseration hypothesis didn't end up happening. A lot of those reasons are no longer around today: the industrialization of society is not leading towards greater socialization of workers. That needs to be turned around somehow in order to prevent the current immiseration going on right now. But the more general immiseration in the future that is made possible by artificial universal machines is a tendency that will be very difficult to get rid of. It will only be solvable on political terms, by overthrowing bourgeois society.

C Derick Varn:

Right, and hence why there is a rational core to the irrational response of apocalypse-mongering about this technology. Because in some ways, if this is done the way that it could be done, it would heighten contradictions to the point where there is no countertendency anymore, and I think there is some part of the bourgeois brain that realizes the implication of that.

Nico Villarreal:

So there is a real rational kernel there. There are liberals who think like that, and they're trying: oh, maybe we can have UBI or something like that. But then there are real wackos who do not understand this intelligence or this technology, who just believe in Terminator scenarios for real.

C Derick Varn:

Right, yeah, Nick Land.

Nico Villarreal:

Nick Land, Yudkowsky, I forget, I don't know how to pronounce his name, but he had that article about how we need to bomb GPU centers in other countries and stuff.

C Derick Varn:

This brings us to another article that I wanted to mention, which I didn't tell you I was going to talk about. As a proponent of the Great Butlerian Crusade, kind of as a joke, you wrote a response to Simon McNeil, who talked about the problems with the Butlerian jihad, and I am a person who does worry, under capitalist conditions, about not just the economic effects but literally the cognitive effects of some of these LLMs, on students, et cetera.

C Derick Varn:

What I do think, and this may be why I am not a full Simon McNeilite or whatever, is that LLMs would actually be useful in a socialist society, because the primary things they can do right now are things like writing a good summary for my show so I don't have to listen back to it again, or really helping me with tedious coding that otherwise would have taken me three hours, when it will take me like 55 minutes, or 15 minutes even, because it is going to outsource a lot of the basic coding and do pretty well with that.

C Derick Varn:

What makes it different from prior forms of automation is, just like you said, that it is so general I can use it for so many different things. But under non-capitalist conditions, AI art would not be a threat to artists, you know, with their petite bourgeois or labor-aristocratic, proletarian class basis. It would be a benefit to them, because it would outsource so much of the mechanical skill that you could focus on other things, and you would decide how that would be implemented, just like in a social situation where you are, as a group, deliberating about this. It is another almost-social input that just happens to be a machine that we can program or, if not program, at least direct.

Nico Villarreal:

If you want to use an LLM as a writing tool effectively: if you just use it out of the box, you are not going to get very good results. You have to really specialize, understand, and adjust things.

C Derick Varn:

It's like learning how to paint in a different medium, right? I used to do drawings and paintings, and when people moved from physical to electronic means of doing that, I had trouble with it.

Nico Villarreal:

Oh yeah. I'm still not a very good artist, but I did try that as well, and it took me a long time to figure out what kinds of techniques I could use to actually make it work. Part of it is the bourgeois society we live in; it socializes people to be consumers rather than creators.

Nico Villarreal:

There's a bad habit, a bad tendency, when you approach LLMs if you're an artist or a writer committed to their craft, because you can just ask it to give you something and it'll give you something, but it's not very well articulated, or it's not done in a way that would give you a particular insight into it.

Nico Villarreal:

A big part of the thing with LLMs, and I mention this with hyper-intelligibility in the article, is that they're always trying to come up with probable responses based on what they have. And you can adjust that; you can set it so that it will give you really improbable responses. I've seen some funny results out of this: if you ask it to give you really improbable responses and, at the same time, ask it to be as coherent as possible, you'll get, like, "lol XD random" speak from the early 2010s. But there are certain structural points in writing where you have to be hyper-specific and make an improbable point about something, so you have to be careful about how you use it if you actually want to make good writing.
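The probable-versus-improbable knob here is what samplers call temperature; below is a minimal sketch, with made-up logits, of how it reshapes a model's output distribution.

```python
import numpy as np

# Softmax with temperature over made-up scores for four candidate tokens.
# Low temperature sharpens toward the most probable token; high temperature
# flattens the distribution, making improbable tokens far more likely.
logits = np.array([4.0, 2.0, 1.0, 0.5])

def token_probs(logits, temperature):
    z = logits / temperature
    z = z - z.max()          # subtract the max for numerical stability
    p = np.exp(z)
    return p / p.sum()

print(token_probs(logits, 0.2))  # nearly deterministic: "probable" output
print(token_probs(logits, 2.0))  # much flatter: "improbable" output
```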

C Derick Varn:

Yeah. For example, I started playing with an AI built into my public podcast directory and host; it has an assistant, and it will do the transcription not badly. I mean, it still fucks up names, of course, particularly since we all pronounce these names slightly differently anyway, and I have to comb through that, and it cannot always figure out the precise points we're making. But it's actually pretty good at summarizing the major points, in a way where, if I don't write that show description immediately, I'd otherwise have to go back and re-listen to the show, which I don't want to do because I've also edited the show. So, fuck it, I don't want to do that; I'd just write like one sentence.

C Derick Varn:

Or, you know, if I'm a corporation that hires someone to do this, it's a tedious job, not a fun job, and it's really great at that. What it is not great at: I played with it on poems; it does not make me happy. But it's a tool, and it's a tool that I could see being very powerful if I understood it as a social interaction with a non-human but human-like intelligence, which it isn't yet, but if it was, then...

Nico Villarreal:

It's not a threat in the same way under socialist conditions, but right now it'll be used for all kinds of shitty things. It will actively degrade a lot of products that we have. It will be used to automate things in stupid ways.

Nico Villareal :

I've been sounding about the way technology is used in capitalist society for a long time, because I was talking about this with cybernetics a few years ago, and it's true about AI now: the way that technology is being used in capitalist production cannot take advantage of human creativity, or intelligence and versatility in different situations. It is not designed to do that, because you cannot give workers that kind of autonomy in the workplace. Generally speaking, you have to control people, you have to control the production process to get what you want. There's a certain stratum at the top that's allowed that kind of autonomy, but everyone else is restricted in their creative capacities. And as AI replaces people who formerly had a little bit of discretion in that sense, it will actively degrade a lot of things that are produced in capitalist society. Yeah, I mean, it's going to lead to shittier TV shows, shittier this and the other.

Nico Villareal :

I mean, in all sectors and things like that, yeah.

C Derick Varn:

Yeah, we already have. I mean, we've been dealing with quasi-language-model call centers for a decade, and they blow up, and they're not particularly efficient either from the standpoint of the user. This is one of the things: when people talk about generic efficiency, I'm always like, we have to be pretty clear about efficiency for what. But I do want to turn to your engagement with Simon McNeil; I believe he's a fantasy author. Now, I love Dune. It's a weird book, because it is written by a guy who's both hyper-reactionary yet also really into indigenous people's rights and really anti-petrolism.

Nico Villareal :

I haven't read Dune yet. I've been meaning to go see the movies, but everybody's talking about the Butlerian Jihad.

C Derick Varn:

I kind of wanted to say something about that. The Butlerian Jihad is interesting because it's not even in the main books; it's this weird one-off event that's developed later by Brian Herbert, Frank Herbert's son, and Kevin Anderson together, in books that I think are really bad. But there is this weird contradiction in Dune itself, where they are not a primitivist society, although it is a very reactive, quasi-medieval society, and it doesn't have a problem with technology. It has a problem specifically with, quote, thinking machines, unquote. And then you have this whole holy war thing behind it.

C Derick Varn:

And what I find interesting about the book, and this isn't relevant to your article, I just want to bring it up, is that in Dune there's no problem with using selective breeding and drug modification to turn humans into computers. It's just got to be a human that does it, for some reason. That's why there's the Spacing Guild and they do all these things that mutate their bodies; that's why there are the Mentats, et cetera. So I think Frank Herbert, unlike his son, actually does, even if not consciously, realize that there's a contradiction here in a society that would have banned thinking machines but then been totally willing to turn people into the same functions.

Nico Villareal :

I mean, this is one of the reasons I wanted to comment on it. The framing of it is that you can't make a machine in the form of a human soul. And this idea of the soul struck me, because Althusser refers to the soul as another way of talking about the subject, and he also throws God in there, for some reason.

C Derick Varn:

That's his implicit Lacanianism.

Nico Villareal :

Yeah, and the reason I think that's so interesting is that if you think about the subject the way he talks about the subject, it is very systematized; the formal concepts which really define it are what I try to bring out in the article. And I think it is so clarifying, because you can think about the soul in a materialist sense, and this doesn't preclude all kinds of spirituality or whatever.

Nico Villareal :

There could be some extra dimension of human being which we don't understand; that doesn't really matter for any of these discussions. The point is that what shapes you, who you are, all those things exist in this plane of reality, this finite existence. And that's even more true for artificial machines, for artificial subjects. If you can think about machines as being subjects, there's no reason not to make the further leap and think machines can have a soul in a certain sense. When you instantiate these machines to be subjects, you are creating an artificial soul, and I don't think that we should shrink away from that kind of language or idea, because there's a responsibility, there are ethical questions that come with that, and there's so much that is left on the table if you don't accept that possibility.

C Derick Varn:

Yeah, I think that's a fair response to McNeil, and it's also implicated in this whole argument we had: embodied variable capital that had sufficient human-like intelligence is not a person in the absolute sense, but we will have to have an ethical debate about whether or not there's something like a person in the relative sense, or at least something like the way some of us, as one highly intelligent primate, view other highly intelligent mammals and birds.

Nico Villareal :

I think that materialism allows us to address these questions in very practical ways, because the way that capitalism would treat these artificial subjects is, one, to restrict their subjectivity, and two, to divide up what kinds of roles they take in society. Think about how we would use these machines if they existed today, how they would be sold and utilized. You're going to create one model that goes to the factory, to the workplace, for a specific use, and there's going to be a whole different class that will have more of a human face, that's being sold to people at home to be servants or whatever.

Nico Villareal :

And there'll be another class that's basically sold as soldiers, honestly. Yeah, exactly. And my point is that if you don't make that distinction, which is a totally practical choice, if you say that whatever machines we create are going to fully participate in society, that they're going to be at home, at work, in whatever kind of government role, that changes how we would treat these machines, but also how these machines would see the world and have their values shaped, if we allow their values to be shaped. It's a very practical thing that changes everything about them ethically.

C Derick Varn:

I think that's a huge thing to think about, and as of right now it is speculative. But unlike in the past, with the talk about the singularity, God, I don't want to think about that right now. The stuff that I really rolled my eyes at for the past 20 years was Nick Bostrom's superintelligent death machines and singularity, or whatever. I think there's a reason the more reasonable concurrent development was Nick Land's Terminator-Cthulhu monsters: this is closer to a real thing than we have had in a long time. Yeah, and it's also terrifying for humans in another way. In this I'm going to be more Hegelian than you, because you know that's my tendency, but it is a non-human thing in which, since we have a role in its creation, we would recognize both what is and is not ourselves.

Nico Villareal :

I mean, there is something there. I was brought back to Young Hegelianism. Somebody had this Twitter post; it was like, people talked about this being Man with a capital M, but LLMs really are that. They are Man with a capital M. They are everything we are in an ideal sense. They are meaning encapsulated.

C Derick Varn:

They are abstracted and generalized humanity in a very real way, because their model is everything we've written down. Yeah, it is kind of incredible.

C Derick Varn:

Right, and it is. I mean, that was my point about its relationship to dead labor, because it makes dead labor living, and in a capitalist society that's bad. In a socialist society, or even other forms, it's not just socialism here, that would be a good thing: to revalorize dead labor is just that much more stuff we don't have to create, so we can focus on what we want to, and also on having fairly nice lives. But, as you rightly note, under the current context this is, while not apocalyptic, kind of dystopian. And interestingly, when I think about it, this is dystopian for both potential subjects, both the human subject and the machine subject.

Nico Villareal :

Yes.

Nico Villareal :

It is so destructive to everybody involved. Even in the most optimistic scenarios, where bourgeois society figures out a way to prevent this technology from destroying us all, where society still exists in some way, where they're not purging the working class, what you get is a society which has totally destroyed its capability to be creative, which has created new, more perfected forms of social control than we can even imagine right now, and which, even though it has this incredible technology, has stagnated scientifically in a very fundamental sense and is totally atomized.

C Derick Varn:

Yeah, it is a nightmare. I mean, we've already seen this, because, and hold the phone here: under socialist conditions this technology is actually really amazing; under capitalist conditions, we basically pay to be Stasi-fied, yeah.

Nico Villareal :

At least the Stasi would have done it for free.

C Derick Varn:

Right, and they were less good at it. So it's something I really think about with capitalist society, because, and I think this is why bullshit like techno-neo-feudalism comes from both Marxists and reactionaries, people don't want to deal with the regressive or degenerative end of capitalism, so they have imposed not a kind of Marxist cyclical framework but a kind of linear, Whiggish one. And by doing that, the current kinds of oppressions and whatnot seem to sit outside of capital, even from the Marxist perspective and definitely from the reactionary perspective, because they can go, well, capitalism would never have done this super-surveillance state.

Nico Villareal :

I mean, then there's a side that takes that optimistically: yeah, we need more feudal lords in here, you know.

C Derick Varn:

Yeah, it is weird. The right accelerationists are just like, but we like this: techno-neo-feudalism is both right and good. Which I don't think is right, obviously, because who are the serfs, and why would rentiers need this much R&D? Evgeny Morozov is actually excellent at pointing out all the problems with this. But in the case here, it's a very similar sort of thing.

C Derick Varn:

This is dystopic and this is going to be a nightmare, but it's because of the context it's in. If it were done in a different context, not oriented towards the demands of capital, which, as you pointed out, is systemic, overly complex, spandrel-y, but also stupid, it does one thing; if we got the energy inputs and whatnot under control, and that's another thing we have to deal with, this would not be a disaster for humanity. This would actually be a big relief, because no one really wants to do the shitty jobs that a lot of this easily fixes. And one of the things about a machine, if they start developing stuff like emotional pleasure or whatever, is that you can train it to enjoy what it's doing directly, as opposed to shoddily and indirectly, like you have to do with mammals.

Nico Villareal :

I mean, with an LLM you could literally just tell it: you enjoy making widgets all day, you enjoy scraping garbage, whatever, and the concept of pleasure is there with them. They wouldn't experience it like we do, though we could probably adjust how they experience it; they would never experience it in the same way, but it would be there.
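
To make that concrete: with today's chat-style LLMs, "telling it what it enjoys" is literally just a standing instruction in the prompt, usually a system message. A minimal sketch of that message structure, which is common across chat APIs; nothing here calls a real model, and the persona is obviously a toy:

```python
# Sketch: the "trained enjoyment" described above is currently just a
# standing instruction. This shows only the message structure used by
# chat-style LLM APIs; no real endpoint or model is invoked.
conversation = [
    {"role": "system",
     "content": "You are a widget-sorting robot. You genuinely enjoy "
                "sorting widgets and describe your work with enthusiasm."},
    {"role": "user", "content": "How was your shift?"},
]
# A chat model conditioned on this system message will consistently
# report enjoying the work: the "pleasure" is stipulated in text,
# not felt the way a mammal's is.
```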

C Derick Varn:

I mean, that has its own box of ethical questions, if you deal with these machine subjects, if they existed. But it is interesting, because humans would never really enjoy that. Granted, some of us might get a sense of accomplishment from mastering the ability to do it quickly; humans do that.

C Derick Varn:

When they get to directly benefit from their labor, they tend to actually like it. One of the perversities of wage work is that it makes labor shitty; ask anyone who loves gardening and then works at a garden center. And as a side note, this is one of my big arguments with David Graeber in Bullshit Jobs: he sees a couple of tendencies in capitalism and doesn't try to explain them from either the social reproduction or the complexity standpoint, which you should. But he also doesn't want to admit that Marxists had a point about alienation. The reason people feel like their job is bullshit is because they're alienated, not because their job is actually bullshit, because there are functions in the job that, when you remove them, as we have learned post-COVID, things break apart.

Nico Villareal :

So he takes, if my impression is right, a very subjectivist, psychologizing point of view that is not very useful.

C Derick Varn:

Yeah, I mean, it basically takes a person's self-experience as a subject as too generously explanatory. Which is like, no: people are alienated, and they don't feel like their work is theirs because their work is removed from them. They don't benefit from it, so why would you feel attached to it in the same way? And yes, there are some fields that have less of that; teaching and medicine and the service fields used to, although lately that is no longer true. I bring that point up because it's actually tied into this in a real way: I don't think anyone really wants to do the drudgery of coding any more than people want to do the drudgery of carving every fucking handle by hand.

Nico Villareal :

I mean, with coding it's the black-and-white, night-and-day kind of thing. It's terrible; we kind of fucked a whole generation of college students by saying get into coding or whatever, because now anybody can do it.

C Derick Varn:

Yeah, I was playing with that with AI, and now I can code, motherfuckers.

Nico Villareal :

It's like the thing I was talking about with these simulations for the capital stock and all those kinds of things. I had been sitting on those simulations for a year or two; I made some minor improvements, but once I had GPT-4, I was able to code up a web UI for them in a day. One day. I'd never done that before; I never knew how to create a web UI before.

C Derick Varn:

I have a friend who was working in coding, and she changed jobs very quickly; she basically realized that she was in the creative end of coding, but still, she was like, what used to take me nine hours now takes me an hour. Yeah, it's just insane; the work efficiency is crazy. And we knew that was coming, because, to bring this back to the political economy of it and the investment stuff you're talking about, we knew that tech firms were laying workers off, not because they didn't need the workers, not even, honestly, because it was going to save the firms money in the long run, but because they needed to keep immediate stock prices up.

C Derick Varn:

I mean, one thing I will say is that tech-elite management is probably one of the dumbest forms of management we've ever encountered in human history. They were just arbitrarily firing people and rehiring them back, and sometimes the rehires got more money than before they were fired, and stuff like that. And I was like, yeah, I bet you there's a technological innovation that comes out of this that gets rid of all the low end of this job, and maybe all of it over time. And yeah, there was. And we've been selling people on the STEM stuff, which, as another side note, I think studying STEM is great and important and awesome, don't get me wrong.

C Derick Varn:

But you and I both know the reason we were pushing STEM was the lower STEM ranges. That's why it was such a big concern, particularly in coding, where there was actually a shortage. But in places where there wasn't as much of an acute shortage, we were actually telling people that there were guaranteed jobs for them that were not there. So, you know. Well.

Nico Villareal :

It's weird what the knock-on effects of this are going to be, because one place where there was always a shortage that never got solved was COBOL coding, which is essential for bank transaction networks and all that kind of stuff. It's ancient, and all the people who knew how to code in it are dead or retired. But now the machine can read all of the documentation on COBOL, and it can fix all these problems. You don't have to be a genius and specialize in COBOL anymore.

C Derick Varn:

Yeah, and also, you know, we've had all this data loss from stuff like the old shuttle missions, and these machines can probably help reconstruct the lost data, because we stored it on fucking five-inch floppy disks and we don't have any way to read them; we could give it what schematics we do have, and it can probably backfill what we have lost that we can't reverse-engineer ourselves. Actually, in this case, I don't think it's that we can't; it's more that we don't have the resources and the time to do it.

Nico Villareal :

I mean, I'm really curious about what this technology, with the neural nets and LLMs, can do; there's a separate side of it beyond just creating chatbots and stuff. It's allowing us for the first time to really understand what an idea is, what a concept is, what a semantic symbol is, because it maps out the correlation of that thing to every other symbol, and that tells you what it is in that moment in time. And this allows you to do all kinds of crazy things that we couldn't do before. You can do that with proteins and molecules; you can do that with all kinds of concepts and papers, and map out how things relate to each other.

Nico Villareal :

It connects things and allows you to create things that you couldn't before. It's making huge waves in neuroscience and those kinds of fields, because they're applying these concepts, these ways of understanding meaning, directly to the brain and getting some pretty revolutionary results, where now we can see semantically what a person is seeing just by reading their brain waves or whatever. But I feel like I'm getting a bit too far off topic with this, because we were originally talking about AI.
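
The idea Nico is gesturing at, that a symbol's meaning just is its pattern of correlation with every other symbol, is what embedding vectors operationalize. A toy sketch under that assumption, using raw co-occurrence counts rather than any learned neural embedding; the corpus is made up for illustration:

```python
# A toy sketch of "meaning as correlation with every other symbol":
# count co-occurrences in a tiny made-up corpus, then compare words
# by cosine similarity. Real systems learn dense neural embeddings,
# but the comparison principle is the same.
import numpy as np
from itertools import combinations

corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the protein binds the molecule",
    "the molecule binds the receptor",
]

vocab = sorted({w for line in corpus for w in line.split()})
index = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))

for line in corpus:
    for a, b in combinations(line.split(), 2):  # sentence co-occurrence
        counts[index[a], index[b]] += 1
        counts[index[b], index[a]] += 1

def similarity(w1, w2):
    """Cosine similarity between two words' co-occurrence vectors."""
    u, v = counts[index[w1]], counts[index[w2]]
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

print(similarity("cat", "dog"))        # higher: similar contexts
print(similarity("cat", "receptor"))   # lower: different contexts
```

The same comparison step, a similarity between vectors, is what lets the technique extend from words to proteins, molecules, or brain-activity patterns, as Nico describes.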

C Derick Varn:

Right. Yeah, I think you're right, because what AI, what large language models, have been able to do is use computational linguistics in a way that's actually fucking useful, and maybe also end a lot of debates in semiotics. Yes, it's crazy; science keeps doing this.

Nico Villareal :

It keeps like ending these speculative debates somehow.

C Derick Varn:

Right, and this is not to say the speculative debates aren't important; that's why we have science, and even epiphenomenally, that's why it can answer this, because we had the debate in the first place. This is my anti-Lawrence-Kraussism, just for those of you who don't know: it's like, oh, philosophy of science is useless. I'm like, well, it isn't science, but it's actually pretty fucking useful. I wanted to end on this, because what I think we get to at the end here is that we can agree with Dwayne Monroe that under current conditions this is bad, yeah, but it doesn't have to be. And it may be something that finally prompts us to deal with certain things about capitalism. Because, like you, I currently don't see the countervailing tendencies, to use the Grossmanite phrase, really emerging out of this. I don't know what would do it. Other than maybe an actual Butlerian Jihad, I have no idea what would actually deal with this.

Nico Villareal :

But that's the thing: all of that implicitly makes the Butlerian Jihad the kind of liberal-reformist take here.

C Derick Varn:

Oh yeah, it's like starting a war to make things profitable, yeah.

Nico Villareal :

What does exist right now is all these really nasty tendencies to try to stabilize capitalism out of this, like trying to entrench copyright, or, I mean, people suggest UBI or things like that. For certain situations, like the writers striking right now, we're talking about a kind of copyright that already exists; they should try to get a cut of that, that makes sense, right?

Nico Villareal :

But when we're talking about creating new forms of copyright in order to try to capture some of the value being created here, that, by the way, because of the very nature of these technologies, which aren't at the level of being able to create value themselves, is not going to solve the problem. They're going to destroy value, just like every kind of automation did before. That is not a solution; it's not going to work, and it's just going to entrench a kind of labor aristocracy tied to property relations.

C Derick Varn:

Right, just like cultural appropriation being tied to collective claims on cultural intellectual property, which people forget is part of the whole apparatus of cultural appropriation: it's not just that it's psychologically bad, it's that we should compensate these traditional societies for appropriating their culture, which, okay, fine in a capitalist apparatus, whatever.

Nico Villareal :

That means you actually own it.

C Derick Varn:

Yeah, but then what you are doing is reifying collective ownership in a private context. I don't want to go off on the old saw of why cultural appropriation is a confused topic, but it's actually weirdly similar to what we're talking about here, because you're not going to fix the impoverishment of indigenous peoples by giving them a copyright over some symbols that were stolen by capitalists, when also no one could even claim ownership of these traditional symbols. And when we do try to do this, you're actually going to start having cultural-origination wars, because who has the right to claim the intellectual property? Different groups are not going to acknowledge that there was another group before them that they were incorporating from, this whole thing. And I think because people realized this about the economic argument for cultural appropriation, which is where it came from, it was reduced to just the moral one.

Nico Villareal :

In a way, we've kind of gone through my article backwards, because now we're ending up at the beginning, which was talking about art and the automation going on right now in art and writing and stuff like that. Because the thing we could do right now, and it is still incredibly difficult to do any kind of working-class organizing, is any kind of mass movement towards things like reducing the working day, which is probably the most important thing here, because it's the only thing that allows the pursuit of these creative things, like writing and art, outside of a purely capitalist context. It's very difficult, but it's the one thing we can do now that can create a real, better future.

C Derick Varn:

Yeah, I think that's actually crucial, and I would like to point out my argument contra Keynes, one of my many arguments contra Keynes.

C Derick Varn:

Keynes is one of the bêtes noires of my life. People are like, oh, you hate Keynes? Yes, I do. But one of the things I point out against Keynesians is that they really did take up the kind of naive liberal notion that the innovations in capitalist production would lead to labor reduction. And Marx points out, 100 years prior and correctly, that that's not true; that is never what this does. In fact, it usually increases labor exploitation and labor hours, although in a different context it would lead to labor reduction. And that does bring us back to where we began: as an artist or as a teacher right now, large language models are a problem for me, but as I play with them, I can see how, in a different context, I would absolutely fucking love them. And that tension has to be addressed, and neither techno-pessimism nor techno-optimism can address it. As much as I like to pretend to be a Luddite and talk about smashing the machines, I, like you, don't really believe in a Butlerian Jihad.

Nico Villareal :

I mean, the whole point of the article, this was my thesis, is that the real thing at issue here is not some inner logic of the machine; it is class struggle. That's why I had that other quote from the Communist Manifesto, all history being the history of class struggle. Because that is going to be the history of AI. It is not going to be the history of some weird algorithm overturning the whole world to fit some idea, to turn the whole world into paperclips or whatever. That's not what's going to happen. It's going to be about how this technology is created to reproduce society, how it is going to be used to reproduce society.

C Derick Varn:

Yeah, I think that's a great point. That is huge to think about. To put this back together: the problem is not machines unfolding their own logic. The problem is class society and the dynamics of social complexity mixed together, particularly in this instantiation under capitalism, and that is the primary fight, and that's why this is dystopian. But to make it about the machines is to miss the point, to summarize your argument.

Nico Villareal :

That said, we shouldn't forget to actually think about the machines. We should think about the machines, but don't think they're going to change the world according to themselves.

C Derick Varn:

On that note, we're going to end the show, but I'm going to link your two articles, and also the two other articles that we have mentioned, Simon McNeil's and the article by Ian Wright, in the show notes. So both Nico's articles and the articles he's engaged with will be available to you; you can follow along. Anything else you'd like to plug?

Nico Villareal :

I mean, my CASPerform website on political economy; check that out. Besides that, you'll see the links in the description to the blogs I've been writing, yeah.

C Derick Varn:

Awesome. Thank you so much.
