
Varn Vlog
Abandon all hope ye who subscribe here. Varn Vlog is the pod of C. Derick Varn. We combine the conversation on philosophy, political economy, art, history, culture, anthropology, and geopolitics from a left-wing and culturally informed perspective. We approach the world from a historical lens with an eye for hard truths and structural analysis.
Varn Vlog
Signs, Symbols, and Silicon: How AI Changes Our Understanding of Thought with Nicolas D. Villarreal
What makes human thought distinctive, and can machines ever truly think like us? In this profound conversation with Nicolas D. Villarreal, author of "A Soul of a New Type: Writings on Artificial Intelligence and Materialist Semiotics," we journey into the heart of what makes intelligence possible—through the often overlooked lens of semiotics.
The discussion begins with a critical examination of how we conceptualize both human and artificial intelligence. Villarreal challenges the dominant frameworks used by today's AI rationalists and humanist critics alike, offering a materialist semiotic approach that provides startling new insights into the nature of thought itself. By exploring how signs and symbols function as the building blocks of cognition, we discover why current AI systems simultaneously impress and disappoint us.
Rather than seeing artificial intelligence as either a potential godlike superintelligence or a mere statistical parlor trick, Villarreal guides us toward understanding AI as a different kind of intelligence altogether—one that interacts with the entire semiotic field in ways fundamentally different from humans. Where human understanding is shaped by individual experience, trauma, and desire, large language models neutrally absorb patterns across the entire spectrum of human communication.
The conversation takes fascinating detours through the philosophy of mind, the nature of narrative, the failures of linguistic policing, and even the unexpected ways social media platforms have trained us to interact with each other. Throughout, we return to a central question: how might semiotics help us create technologies that enhance rather than diminish our humanity?
Whether you're fascinated by artificial intelligence, cognitive science, philosophy of mind, or political theory, this conversation offers fresh perspectives that challenge conventional wisdom and open new avenues for understanding both technology and ourselves. Join us for a thought-provoking exploration that may forever change how you think about thinking.
Music by Bitterlake, used with permission, all rights to Bitterlake
Crew:
Host: C. Derick Varn
Intro and Outro Music by Bitter Lake.
Intro Video Design: Jason Myles
Art Design: Corn and C. Derick Varn
Links and Social Media:
twitter: @varnvlog
blue sky: @varnvlog.bsky.social
You can find the additional streams on Youtube
Current Patreon at the Sponsor Tier: Jordan Sheldon, Mark J. Matthews, Lindsay Kimbrough, RedWolf, DRV, Kenneth McKee, JY Chan, Matthew Monahan, Parzival, Adriel Mixon
Hello and welcome to Varn Vlog, and hopefully you guys noticed a quality difference. The last four or five episodes that I've been recording, which will be released, had issues: I had to upgrade some equipment, and then my streaming service added more AI functions (weirdly relevant to today's discussion) that caused a memory problem which made it look like my internet was bad. That actually wasn't the case; it was a problem with my machine's memory. I have now built all new equipment, boo, I guess this is why we have Patreon, so this should run a lot smoother. That doesn't pertain to anything other than the mention of AI, but I have Nicolas Villarreal, or Nicolas D. Villarreal, I have to remember, because you apparently have a super common name, which I only discovered was super common when I started looking for my post about you and realized that there are like five other Nicolas Villarreals.
Speaker 2:Exactly.
Speaker 1:That's why I had to include it. So, Nicolas D. Villarreal, and we are talking about a book that I have in my hands. For those of you who are listening, it is orange and self-published, but it is quite nice, and I also endorsed it: A Soul of a New Type: Writings on Artificial Intelligence and Materialist Semiotics. It takes off from a conversation we had and a couple of articles that you wrote way back in the day for Cosmonaut, about two years ago, and you developed that out, dealing with more structural semiotics and some discussions about artificial intelligence. Now, I want to say LLMs as a form of artificial intelligence have actually underwhelmed me more than when we started talking about this two years ago. But I'm not sure that is because of the nature of the technology itself, rather than how it is being programmed, trained and used. So you know, we will get to that.
Speaker 1:But I think what is more important is that you started looking at neurology and structural semiotics, and semiotics, unfortunately, is a field that I used to say peaked with Charles Sanders Peirce. For those who don't know, that's almost 200 years ago. He's also the inventor of pragmatics, which is not the same thing as pragmatism; despite people considering him a pragmatist, he probably actually considered himself a weird analytic Hegelian of the 19th-century American type. But we're not talking about Charles Sanders Peirce today. We are talking about all kinds of semiotics. So I wanted to ask you, because I do feel like semiotics got associated with Roland Barthes and Umberto Eco, who I liked. I studied Umberto Eco in college; I was actually kind of obsessed with him for a long time, and his book on semiotics was my primary introduction (mine as well), but it really kind of got sucked into what I like to call the abyss of Team France.
It got incorporated into questionable uses by people like Lacan, and I'm not referring to Lacan himself, I'm referring to people using him and stuff like that, and I think it led to semiotics, as a massive insight into cognition and linguistics, not being taken seriously enough, because it was associated with French theory, even though it's been kind of plugging along, including in stuff like computational linguistics and large language models, this entire time. It is still crucial to discoveries being made today at the intersection of language, neurology and machines. I consider it one of the things the cyberneticists really missed back in the day, because they based all their cognition theories off of behaviorism and the ideomotor effect for a long time, and then when they realized it didn't apply to people, they just went and fucked with machines, which is kind of an unfair stereotype. But there's a reason why, basically, when leftists go to deal with cybernetics theory, they kind of stop with theory from the 70s and go backwards. They don't look at theories from the 80s, 90s and aughts, because there are problems there.
And the other thing I will say about behaviorism is that behaviorism is an interesting case study, or a paradigmatic problem for us, because behaviorist interventions do actually work a lot of the time, but they are also 100 percent based off a faulty theory of mind. So it's kind of a problem for us: we know that behaviorist assumptions about the mind are false, but we also know behaviorist interventions in pedagogy and behavior correction and stuff like that actually work. And I've always tried to compare it to the Ptolemaic calendar problem, and people don't get it. I actually got this from Umberto Eco, who pointed out in an essay, in the eighties I believe, that until almost the 20th century the Ptolemaic calendar was more accurate than any calendar based on Newtonian mechanics, and I was like, oh, that's a problem. That's a problem for a lot of pragmatic epistemology, if you assume outcomes are actually a criterion of truth claims, because if that's the case, then why did we need to abandon the Ptolemaic calendar for such a long time? Eventually the outcomes did get better, but it took Einstein to fix that.
So, into your book, finally, with all that preamble and me talking. I'm excited about this; this is a topic I don't cover on the show, and you actually do kind of tie it into how and why this might be important for politics, and not in the lame linguistic-justice, language-policing way that a lot of people try to think about it. So what semiotics did you find most useful when you went into this book?
Speaker 2:Right. So I really did lean on that Umberto Eco book. I probably quote that one line from him too much in my book, the line about how, in order for something to be able to tell the truth, it needs to be able to lie, or something to that effect. I use it too much because originally each of the essays was a standalone and each only mentioned it once. But I really basically go from him, because he immediately connects it back to cybernetics and even to developments in AI that were going on in his time, which was mostly symbolic AI. And I instantly saw in his descriptions that, well, if you could build a machine that could do the semiotic analysis of all the connections between words and stuff like that, and then extrapolate from that, you could build a machine that could produce human language utterances that were meaningful. And that's what LLMs are.
Speaker 2:Umberto Eco called that back in the 70s or something. And to back up a little bit further, my interest in semiotics really started after that original essay I wrote for Cosmonaut, and in some ways that led into it. What happened was that I was at an event about AI. It was hosted by Palladium, and there were a lot of rationalist people there, and ex-rationalists.
Speaker 1:Scott.
Speaker 2:Alexander types, yeah, those types of people who are all over Silicon Valley. For those who don't know, this originally started as an internet thing, not connected to any historical schools of rationalism necessarily. It really pulls from certain strands of utilitarianism, 1950s decision theory from von Neumann and Nash, and that sort of thing. Talking to those people, who are basically the source of the talking points about why we should expect an AI to focus in on certain goals and pursue them to the expense of everything else in the world, their theory of mind is that there is an ideal rational type of mind that is basically described by these formalisms of decision theory and utilitarianism. Which runs into, and this came up in the stream with Cosmonaut on the topic, Marx's critique of Bentham, which was that it's so obvious that these utilitarian ideas are just reifications of existing social values; they're not trans-historical and stuff like that. And the current-day rationalists obviously see this problem and try to do basically the impossible: to create, from first principles, a set of values, a sort of mind, which doesn't have this problem, which has these certain trans-historically good utilitarian values. And that, basically, is what launched this: I felt like these people were missing something, that there was an approach that wasn't being taken in all of this. And I knew Althusser's connection to structuralism went through structural linguistics, and I felt that if I dug deeper into that, into the science of signs and symbols, I could find a better answer to all of this. So that's basically what I did, I tried to investigate in that direction. And I think you're right about the direction that semiotics took.
Speaker 2:Obviously I'm not an expert in this history of thought, but my impression is that Althusser and Lacan had a pretty good idea of how semiotics works. I just read two books by Lacan recently which gave me this impression, although he doesn't necessarily go too deep into the nitty-gritty of semiotics; it's more the cliff notes of it, although there are some finer details that he does draw on. And Althusser is basically the same way. But once you get to the post-structuralists, they're no longer focused on all those things that Umberto Eco talks about: how signs are created by cutting up a certain continuum of possibilities, which are then connected to other things they stand in for; a sign is something that stands in for something.
And what the post-structuralists basically do is think of signs only as a chain of signifiers, and they stop thinking about the materiality of those signifiers. I think they become very idealist in their use of semiotics, which naturally lends itself to English departments and things like that, which is where a lot of this ends up, but it's not very useful for many other things.
Speaker 1:Yeah, your book made me think of a weird adventure when I was still an academic way back, and this was, I guess, 14 years ago now, in 2011. I went to a scholarly conference on narratology, and I was actually talking about the Korean-American artist, I'm trying to remember her name right, Theresa Hak Kyung Cha, and her book Dictee. I was talking about narrative patterns and the breach of narrative patterns and other symbolic references within her work as a critique of both anti-nationalism and nationalism in the Korean experience. Blah, blah, blah.
Speaker 1:Okay, fine, but I was also interested in scientific discussions of narrative, which, that year, were pretty hot at this conference, but most of it was based off of speculations about evolutionary psychology that could not be proven and didn't really try to be.
It was, you know, these were English majors dipping into evo-psych literature, and then an evo psych of narrative that also thought that evolution and markets were basically the same thing, very similar to these rationalists that we're talking about. And I remember at this time I was reading Scott Alexander, and I read LessWrong back then, and I turned on both of those groups of thinkers, and particularly that thinker, because they were just philosophically adrift about different kinds of reason, learning logics that didn't fit basic predicate logic. They tried to get around that by going into game theory and then making game theory somehow more universal than it is. I find game theory very useful for dealing with social competition, but I don't apply it to everything. And then you also had that they had an economic, or maybe even an economistic, view of rationality, where rationality is the maximization of preference. And how do you know that? Because you did a thing, therefore you maximized your preference, which was perfectly circular and uncriticizable.
Speaker 2:Here's the thing: I spent a lot of time in the book getting at why they thought this exactly, which was because Eliezer Yudkowsky, who's one of the main figures in all this, calls himself a materialist, a reductive materialist, at least back in, I don't know, 2008 or 9 or something.
He used to, at least. Yeah, and the reason that they say goals need to be maximizing something in reality is because, they say, if you don't do that, you're potentially leaving yourself open to dominated strategies, where the things that you want, you're leaving them on the table, you're not getting the exact reality that you want. And the problem: I really had to rack my brain to come up with a very specific critique of this, because what they were refusing was the idea that goals could be expressed in more complex ways. Specifically, I use in the book the term higher order signs. So first order signs are things that are basically proper names, things that refer directly to something in reality or some level of reality. The example I give most commonly is the phrase apple juice: when you know what apples are and you know what juice is, and you combine those two things together to figure out what apple juice is, your understanding of apple juice is a higher order sign, because, as you're combining, it's referring to two other signs to inform itself. When you think of apple juice only as a phrase or word that stands in for this particular golden, sweet liquid that you know by experience or whatever, that's a first order sign. And what I elaborate very clearly is that even very abstract things can become first order signs if you think about them that way.
Like I give the example of temperature, which can be a first order sign or a second order sign depending on how you're treating it. A lot of scientists will treat temperature as a second order sign, because they're approaching it from all these different theories as well as empirical measurements and whatnot. Most people in everyday conversation will treat temperature as a first order sign, essentially. And the thing with the rationalists is that they insist that goals can only be first order signs, because if you're doing anything else, then you're leaving something on the table; it's not rational according to their definition. And the problem with that is that lots of important things are higher order signs. If you wanted to make something that doesn't exist right now, it would necessarily have to be a higher order sign as you are articulating it as a goal for yourself. So if you're a scientist trying to create a theory of everything, that theory of everything only exists as higher order signs. There is no thing you can point to and say this is what I'm trying to create.
Speaker 1:In common language, a higher order sign would be an abstraction.
Speaker 2:Basically, yes. I wanted to formalize it more rigorously, but yes. It makes sense, though, because almost all abstractions refer through other signs as well.
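(A rough illustrative sketch of the first-order/higher-order distinction as described in this exchange; the class names and structure below are my own assumptions for illustration, not a formalism from the book. The idea: a first-order sign stands in directly for something known by experience, while a higher-order sign is composed out of other signs, as in the apple juice example.)

```python
# Toy sketch of the first-order / higher-order sign distinction discussed above.
# Names and structure are illustrative assumptions, not the book's formalism.
from dataclasses import dataclass


@dataclass
class FirstOrderSign:
    name: str
    referent: str  # stands in directly for something known by experience

    def unpack(self) -> str:
        return f"{self.name} -> {self.referent}"


@dataclass
class HigherOrderSign:
    name: str
    components: list  # other signs this sign is built out of

    def unpack(self) -> str:
        inner = ", ".join(sign.unpack() for sign in self.components)
        return f"{self.name} -> [{inner}]"


apple = FirstOrderSign("apple", "the fruit you know by experience")
juice = FirstOrderSign("juice", "pressed liquid you know by experience")

# "Apple juice" understood compositionally is a higher-order sign;
# understood only as a label for a familiar golden liquid, it would be first-order.
apple_juice = HigherOrderSign("apple juice", [apple, juice])
print(apple_juice.unpack())
```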
Speaker 1:This is why common language philosophy and technical philosophy are always at odds. For example, I love Wittgenstein for talking about the way people actually interact with and understand language. I do not use Wittgenstein for semiotics, because it's not going to help you very much, except for the idea that often a sign is as much defined by what it excludes as by what it includes. But you also get that in Saussure, so you get it all the way back to the origins of linguistics.
So you know, I think it's important to deal with this, because when you have this, I'm going to use a horrible Umberto Eco-an phrase, this Ur-rationality or something, you're dealing with these modern rationalists who are also very cut off from historical rationalism. They're not like the English rationalists or empiricists, they're not like the logical positivists. They kind of fuck with Popper sometimes, but usually only politically, not actually dealing with his theory of science. Semiotics, probability, it makes their brains ache, even though they work in math. So I feel like they're the number one enemies. And yet I would also say, when it comes to stuff like this, what we're talking about with AI and semiotics right now, it's them and the continentals that I think are spouting gobbledygook that we have to deal with. And I want to say I'm not actually talking about Lacan here.
I don't love everything Lacan has to say. I do think Lacan's guilty of occasional garbage, but on semiotics he's basic, but astute about what he is basic about. I've gone through it and I'm like, oh yeah, that actually makes sense. And I think Eco's an interesting figure, if you read Eco's popular essays, because Eco is not a hard rationalist, he's not trying to do what Charles Sanders Peirce was trying to do, but he's also not a post-structuralist who treats signs and signifieds in this Heideggerian language mess of authenticity and deconstruction. That's not what he's doing either.
And I actually think semiotics is important because, to go back to Steven Pinker's book on language, and I know we're all supposed to hate Steven Pinker, and I do hate a lot of Steven Pinker, but his book on language is not bad, he actually says human beings don't think in language in a Sapir-Whorf way. We experience thought as language, but we don't actually think in language in this way where the language that we speak, like English, is encoding our brain like a computer program. But we do think in signs that way. And while I'm pretty critical of things like Chomsky and universal grammar, I actually do think if there's a rational core to it, it lies in semiotics and the way we process signs and what signs are for.
And Umberto Eco actually is up there with me on this; I came to it independently, but I love that part of the book where, I think, language was invented so we can mislead each other. I actually came to that from studying anthropology, because our body language is really hard to lie with. To train yourself to lie with your body is actually a massive intellectual undertaking. To train yourself to lie in words is harder than people think, it's not just saying untruths, you do have to mask some stuff and there's some body language stuff you've got to do, but it's way easier.
And I think this matters if you want to understand what makes human beings human. While I would say there's some interesting research about other primates and large ocean animals, it does seem like cetaceans, like whales, actually do have cultural language differences and structures, and we don't understand them, but they do seem to exist. But there are not a whole lot of animals that we understand well enough to see if they actually think in signs this way. Usually we have to start looking for reflexivity and self-reflectivity to see that, and you get that from elephants, dolphins, whales, but then other animals that you'd be kind of surprised by: corvids, magpies, crows. It's not even necessarily obviously related to brain size. So I find that interesting, and I think that's a helpful way to think about it.
Think of it not as language, as in the phonemic structure that we happen to speak in and the written signs that we tie to that, which are usually either syllabaries, phonemes or ideograms; if you don't know what those are, phonemes are what English writing encodes, syllabaries are like the Korean script, Hangul, and ideograms are Chinese characters or hieroglyphics or whatever. There's a deeper symbolic structure that we have, and without that, human thought is really incomprehensible. We would not be self-reflective without this sign stuff.
Speaker 2:In the book I have a brief critique of Chomsky. We all love critiques of Chomsky here, yeah.
Speaker 2:Well, it's related to this, because I think Chomsky was partly right. What he was wrong about was specifically the universal part, but also just that notion that humans treat language like a computer, executing and manipulating symbols with these specific rules and stuff like that, which is something that people do on occasion and even do to create sentences sometimes.
But that's not the norm of human language. And the operations we do aren't just the operations of grammar; it's pretty much any operation that humans can think of, they can manipulate language in a certain way. But what humans usually do is not on the level of grammar, because humans will learn grammar, but we'll also learn specific sentences and specific phrases and vocabulary that we just say to express something, and we'll repeat stock phrases, even when it's grammatically wrong to put them in a certain context, because they're familiar to us. If people approached language the way Chomsky thought we did, we would make a lot of errors that were awkward in a way that humans' errors aren't. The errors that humans make tend to be familiar in a lot of ways; they're not like computer errors.
Speaker 1:Yeah, I mean, and we have to remind ourselves, when I think of language, and I think of language as a subset of semiotics actually, we're dealing with three different things: semantics, syntactics or syntax, and then I add dialogics, which is from Russian, specifically Soviet, linguistic anthropology. That's the idea that the history of a word contains all its utterances, and those meanings can come back up. So when you say a word or an utterance or a phrase, there's actually a history of use that you are invoking, which you don't necessarily need to know, and probably can't know all of, but all of it could come out in interpretation on somebody else's side. So I think of dialogics that way. And semantics is one of the reasons why we need dialectics, because how do you decide what fucking words mean?
When people say words have meanings, I'm always like, yeah, they do, once we agree that they do, and as long as we agree that they do. But the moment words are so contested that we don't agree on their meaning, they don't have a meaning anymore, and that really bothers people, but it does seem to be the way things are. That's semantics. And then syntactics, that's really rare; when I think about the refinement of language, that's mechanics, that's grammar, that's the meaning implicit in those structures. And one of the things that we learned from Chomsky's universal grammar, and this is from him, this is not me critiquing him, this is me accepting his theories, is that he kept on having to reduce what was universal about it. If you look at his literature towards the end of his writing on that topic, you're getting down to very small structures of language.
And then there's the other issue, raised by Chris Knight, which I agree with, that it brackets out sociality, and Tomasello's critique of Chomsky, which is, well, in emergent systems this might actually just be caused by our brain interacting with external stimulus. I am not actually sold on Tomasello's answer, but I do think if you just have Chomsky's theory and Tomasello's theory, they both, I say perfectly, really they both imperfectly but coherently, explain the languages that exist. Good luck: you couldn't adjudicate between the two. And I do think the semiotic stuff actually gives you a way out of thinking that this universal grammar program is what we're actually dealing with and that it works like a computer program. I mean, thinkers have been trying to do that ever since Boole. For those of you who don't know, Boole was the guy who came up with Boolean logic, which we used to use for search engines, very useful for that, and he thought he was describing human cognition.
Speaker 2:He was not, at least not at the level that he thought he was, but there is a sense in which there's a kernel of truth to it, right?
Well, maybe there's some quantum shit that is not like the true/false or whatever, but there is a sense in which Boolean logic is related to the logic of the signifier and the signified, in a way, which is the connection that brings it back to cybernetics.
And I don't know if you read the little aside on perceptrons I put in there, which kind of gets into that. The perceptron is built off of something very simple; there's not that much logic involved, just a little bit. But it's basically the key to all of this, because the perceptron allows us to go from Boolean logic to the logic of the signifier, and also, because of things that we know about how neural networks work, you can go from the perceptron back to Turing machines. With a multi-layer perceptron you can create a Turing machine, and this was one of the important things in establishing this connection between the number of signs you can create and hold in stored memory and intelligence.
Speaker 1:This is kind of the connection between all of that. So in your book you assert that intelligence almost necessarily is a higher order sign in almost all instances. I mean, there are some times where we would talk about the intelligence of an individual, where, if we bracket it that way, we might kind of be talking about a first order sign; it's a little bit vague even there. But I think that creates an interesting problem: what are we talking about when we talk about intelligence, period? Because a lot of people associate intelligence with qualia, which it is probably not. I mean, qualia are cool, I like qualia. For those of you who don't know what qualia are, they're the subjective experiences that you have, the felt essences behind words, which are logically arbitrary. So red: we know it as a quale, but logically it's arbitrary.
Speaker 2:The way I understand it is it's like a pure phenomenal experience of a concept or whatever arbitrary thing.
Speaker 1:Yeah, I think... go ahead. Yeah, which is funny, because I remember the great debate in the rationalist community around 2011 was that they were trying to say qualia didn't exist, and I was like, okay. But go ahead.
Speaker 2:Well, I guess, like, from a materialist perspective.
Speaker 2:I kind of dip into this a little bit.
In some of the essays. I don't think I specifically use the word qualia, but there's that whole thing of, do we know what it feels like to be a bat? And in a lot of ways we kind of do. Humans can even do echolocation a little bit, on a small scale. Obviously we don't know what it's like to be that tiny and fly through the air, but any kind of experience that you could have is a materially encoded thing, and if you can replicate that material encoding, even in a broad sense, you can get an approximation of that experience. And so with AI and LLMs: I think LLMs obviously don't have a phenomenal experience. In particular, they don't really have sensors that could input any kind of experience like that. They also don't really have memory either.
I mean, they kind of do, it's a little funny. They probably could.
This is something that's probably not too far off: we could have a machine that does have sensory experience and then also has this semiotic experience of signs that it associates with those things. I think that's probably coming. And this is the thing that really grinds my gears with the humanities right now on the left. It's all of these polemics that basically either just point to the lack of qualia among machines, or just say that there is some other sort of human essence that they keep very vague, an almost vitalist essence, and I think it's just.
Speaker 2:It's immensely disappointing, the level of analysis coming from the humanities and the left on these things. I wish that this book, the things I'm talking about with semiotics here, had been said by someone over five years ago, longer than that.
Someone should have written this a long time ago, and probably written it better, because I had to stumble through learning all this stuff to write it. And because of that legacy of the post-structuralists, and we've talked about this before, there was nobody really in the humanities who knew these connections between semiotics and cybernetics and AI and all that stuff. I even had this whole scrap on Blue Sky the other day with a guy who did have cybernetics in his bio and was a humanities academic at some university, who wrote one of those articles about how AI lacks all these certain things, and it's very reductive, really just relying on those cliches, and it was absurd to me: how could someone who takes cybernetics seriously do that? Well, there are also traditions of using cybernetics in a purely artsy type of way, which is a thing, not that common, but a thing. And it's just, why isn't anyone taking this seriously? I really don't understand it.
Speaker 1:Besides, like, the obvious reasons. Well, I mean, some of it is a lack of expertise in the necessary fields. One of the problems you have when you deal with cybernetics is that it's like historical materialism, in the sense that the number of fields you really have to master to actually talk about it meaningfully is hard. But you don't have to master a lot to talk about it unmeaningfully and sound like you know something. So, you know, how many people have I asked, when people say that's not dialectical, well, what do you think dialectics is? And I go through the entire history of the term. It starts as a specific form of classical dialectic, but you see this form develop in India, you see it develop in China, you see it in Tibet, and Tibetan Buddhism has higher dialectical traditions. And dialectics changes meanings in the 18th and 19th century with German idealism, but it wasn't because its form was changing. It's because it was trying to argue that, instead of having individual interlocutors debate and the best debater wins and sets the terms of the definition, the way you get past what leads to obvious aporias all the time (aporias are where you can't make a decision in any meaningful sense, you really don't know) is to have history be what decides the conceptual. Now, that has its own huge series of problems, for you as an Althusserian and for me as whatever the hell I am, which is not that.
People think I'm a Hegelian. I'm not. I just think that's how Marx thinks, that that's a way out of this problem. One of the things I do like about semiotics is that it deals with structures, but, unlike a lot of structuralist forms of philosophy, it does not dehistoricize them. Structures have a history of relation, and so do signs. This is very clear if you read Umberto Eco's work: we have these base structures and signs, and when we talk about what these signs mean, and their procedural cultural iterations, and the way they developed, you're going through long histories of both artistic representation and material fact.
And you know, sometimes I think Umberto Eco plays that like jazz and is a little free and loose with it, but that's still what he's doing, and that's really important for coming to something like what an abstract intelligence looks like, because in some sense it's encoding; it is based off of an explicitly human intelligence but does not, at least not yet, think like a human. It's going to come to some interesting cul-de-sacs that I don't think the rationalists you're debating with in this book can explain. I mean, this book has a double polemic: on one hand you're fighting the rationalists, the small-r, LessWrong kind of rationalists, and on the other hand you're fighting this left namby-pambyism that is almost vitalistic. Now, I will say I'm not sure that, even if artificial intelligence became a general AI, it would think like a human. I'm actually not sure about that, you know. Go ahead.
Speaker 2:Well, you know, it's actually funny, because after I finished the book and started reading some Lacan for a different thing, I realized that there was one thing I kind of hinted at but didn't actually articulate in the book, which is one of the other distinctions between a human intelligence and what a potential LLM sort of intelligence would be like. Because right now the LLM is basically absorbing the whole semiotic field, or at least the whole of what's available to it, much more than what humans could absorb, and it's doing that in a neutral way. It's trying to get all of it and predict it as best it can.
But humans don't approach the social semiotic field in the same way, because our attention is closely connected to our own perspective, our own desires, our own traumas and pain and what have you, and that shapes how we learn what things mean. That's the thing that I got from Lacan; he didn't say it in those words, but that's what he was talking about. I realized, well, the way that trauma affects people psychologically, that's what's going on: they are learning things, like objective knowledge, within themselves in a certain way, subjectively, because of the way they're focusing on different parts of the semiotic field around them. And potentially you could get a machine to do something similar, if you got it to learn things at a slower rate, in the way that humans do. But that's not the way that LLMs learn language and symbols and whatever.
Speaker 1:So, basically, to go back to Bakhtin a little bit, it would have access to the entire dialogic spectrum of all utterances, if it was actually comprehending what it was taking in, which we don't, you know. And it might learn trauma responses and stuff. I actually have no idea how the affect would emerge with this. I assume it would, but I don't know what it would look like.
Speaker 2:Well, the point is that it's not just a trauma response. It probably knows examples of how people experience trauma and could re-articulate them a certain way, but what it can't do is shape its own set of associations. It can't forget everything that it learned and create connections between symbols that aren't just what's represented in the semiotic field as it exists. It can't have an idiosyncratic understanding of things that develops the way it does in humans, right, like when we experience things in very particular, often odd ways.
Speaker 1:I mean, so basically, in my way of articulating this, to have the entire semiotic field is to have human knowledge or human experience in aggregate, but not in experience. So even if it had qualia, because it had sensory inputs, and I know someone's trying to do that, that does not mean that its understanding would be the same as ours, because it's not learning things the same way. Honestly, to use the stupid Buddhist metaphor, it would have access to the whole elephant, where we're feeling the trunk and stuff, which doesn't necessarily mean that it would understand everything better, but it would have a more complete picture, and that would actually in some ways make it hard for it to articulate to us what it's trying to do, if it could do that. Right.
Speaker 2:So when we interact with a chatbot now, it's always developing sort of pseudo-personalities.
And the reason that I say they're pseudo-personalities and not just personalities is because, and this is one of the examples that Lacan got me going on, if we imagine ourselves in the position of a table, we can adopt all the semiotic associations and correlations of the table and have that inform how we speak and act.
But we can't have the particular position of consciousness of the table, because we have not experienced being the table and allowed that experience to shape our own internal understanding of correlations and associations of symbols and whatnot. So there are authentic positions in semiotic fields, more or less, and sometimes an ideological interpellation can put you into those positions. Like when you become a Christian, that changes how you see the world going forward; it will shape the correlations and connections that you make. But there are also inauthentic positions of just taking an arbitrary standpoint. And that's basically what LLMs are doing: they are taking these arbitrary standpoints in the semiotic field, because they're interpellated into them, but they don't have the consciousness, the historical consciousness, that would come with it.
Speaker 1:That would require active learning over time in that specific position. I mean, this has a lot of implications. It cuts against a lot of the humanistic "oh, LLMs could never," and I'm not sure, again, that LLMs can, but I do think something might be able to approximate intelligence in a way that is like, but not the same as, a human being, and would be effectively sentient in the way that a human being is, with or without qualia. And that last part throws people off, because one of the reasons the rationalists were trying to say that qualia didn't exist, which clearly I think they do, is because it does lead to a lot of nonsense, woo crap, like, you know, there are just some things.
I'll give you an example. Go back to the bat. What is it like to be a bat, famous essay, blah, blah, blah. It is true that we cannot know exactly what it's like to be a bat, but we can know the material inputs, replicate the parts of it we can, and what we can't, we can know by analogy. So, while it is an imperfect knowledge, it is not nothing. It is not like the bat-ness is just totally foreign to us as a concept and we could never even begin to approach it.
And I do think this has pretty big political import, because one of the things about contemporary liberalism, in its turn away from universalism (which I admit has its own problems, we can talk about that), is that it's gone to such a radical subjectivism on this stuff that it makes it seem like, well, if you can't know someone's qualia, you can't know anything about them, and I'm just like, that's nuts. And then they make another jump, and standpoint epistemology does this a lot: that knowledge of qualia is also somehow structurally significant, so that my knowledge of the qualia of oppression as a Black person somehow also gives me knowledge of the structure of oppression, which there is no reason to assume. Why would it? Just because I experienced being burned by fire doesn't mean I know how fire fucking works. I know that it burns me, I know what that feels like, but I do not know the chemical structure of fire from that experience. And I don't know why the humanists do this. I really don't.
It's not a logical deduction, it is sort of a rhetorical game. It makes everything more alien than it probably is, and it does do this thing where, basically, if you cannot have true one-to-one, both experiential and structural, knowledge of a thing, you must pretend that thing is somehow a thing-in-itself, in such a way that you can never know it. But then liberals don't take that to its logical conclusion, because the logical conclusion is that no human being can understand what it is like to be any other human being at all.
Because once you start breaking things down to the categories that they seem to think are super important for this, race and gender or whatever, then, if you follow it through, it would logically lead to a radical individualism on the verge of solipsism: not only can I not know what it's like to be any other being, I don't even know why I should assume they exist. If you follow this out to its logical conclusion, people go, that's absurd. Well, a reductio ad absurdum is a test to see if an idea makes any sense; there's a reason why that's absurd. But it is implied by a lot of the way we talk about this stuff, which also leads to the rationalists, I think, countering it in ways that are also nonsensical, like, oh, you think you experience qualia, but you don't.
Speaker 2:I mean, you're exactly right, what you were saying a little bit ago, that the book is an attempt to fight a two-front war, basically, with the rationalists on one side and these more woo leftists on the other. I don't address it directly in those terms, but it's like the debates about epistemology in that essay on materialism. What is Lenin fighting against in Materialism and Empirio-Criticism? It's these ideas, like the neo-Kantians', basically, that we can't know anything in itself, and the various skepticisms and stuff like that. The whole point that he's making is, more or less, that yes, we can know things, and that we can study material reality to try and know things. And of course there is no proof, no guarantee, in the words of Althusser, that material reality will actually reveal to us this true, authentic knowledge. But if it can't, then we're basically living in a fantasy land anyway, a Descartes' demon or whatever. So this is the game we're playing, so we better play it.
In more practical terms, and I am curious what you think the actual politics of the book are, and your appraisal of them, this comes back into questions of, well, if we're going to take all this seriously, then we should take the concept of the soul, of the subject, as a serious thing, of what this means encoded in silicon, more or less, and this applies directly to political things. It has ethical consequences; I don't really go into many of them myself, but I think the political aspects will have the most important effects: how will AI get incorporated into broader social reproduction? Basically, I think that will determine, in a materialist way, the ethical consequences.
Speaker 1:Well, the politics of the book are a little bit unclear until I see other of your writing. So for recently you wrote something for Default Blog which I will cover on this channel in the Radical Engagement series. In fact it's already slated to go up, probably in July, about the theory of the young boy and what that says semiotically. And the reason why I think this is actually of import is that we are kind of stuck in two political modalities those people who say that culture, symbols and science don't matter at all and everything's some kind of hard material politics. I associate Danny Besner, who's just like culture doesn't matter, period, it's always downstream from economics and military force, which I I don't hold to be true, actually for reasons that even in Marxism we could go into. Or you get stuck in this like what I like to think of as the ret-comp graduate student. For those of you who don't know, rhetoric and composition has a reputation for being the wokest of even the English disciplines, and part of the reason why is they focus on linguistic justice, and by linguistic justice they tend to mean word policing, as if controlling and limiting the signs and functions of a word and trying to cut off its dialogic history or impose a new word that cuts that, that you could just synthetically import that history upon um removes that. You can have a language that changes the way you think and allows you to be less racist, or whatever. This is the theory for all the word policing more or less online, when it's not just about politeness or recognizing someone's humanity. This is the idea like person, first language in this than the other.
Speaker 1:Now, comedians and me have been making fun of this for a long time because I think it has the order of operations backwards. But I also find the people who say that culture has no effect at all and the symbolic stuff has no effect at all and it's all downstream of economics and politics. I'm like, well, what the fuck the economic and politics coming from? Because cultural relations are encoding the way we like think about the relations to each other, which which affects both production and social reproduction, because it be the logic that they would ultimately have to succumb to if they were thinking this all the way out and the way that they parse the world. And I think this materialist semiotics gives us a way to say no, this stuff does matter, but there's a deeper substrate that we have to deal with and it is not determinative. We are not going to change people's social reality by changing the word they use or teaching them a language.
Speaker 1:And you know, I had debates about this when, like when I mentioned, when there was an article about about like, if only people learned indigenous languages, maybe there'd be less war, and I was like what the fuck are you on? One, do you think all indigenous people didn't have war because we got a lot of history to talk about? Two, um, this sapphire wharf stuff in the strong form has been disproven for like 60 years now. There are weak forms of it that are kind of true. For example, you know, if your language didn't have a concept of green, you will associate blue and green together and use them. You know you'll have one word for them. Or you might even like japanese, even modern japanese that does have it. Now I have a word for green, but didn't until probably about 400 years ago or something, and so you have these uses of green that refer to things that are called blue, et cetera, et cetera. So in that sense, yes, but what are we talking about there? We're talking about things that are qualitative and how we name them and what the names do and how we delimit them out. We aren't talking about a whole new fucking experience, whereas if I had no, no, no greater order sign for color at all, it could not differentiate it. That really would be something completely different Like, and so this is the kind of thing I think about a lot when we talk about this, and I do think it implies a kind of it's against what I like. In a deeper sense, it's against what I like to call Graberism. Everyone knows my great feud with the late David Graber. But where is basically? If you think it, you can build it because you thought it.
Speaker 1:And the thinking is the most important part to me. It's very weirdly idealist, um, and he has these symbolic worlds that he assumes are somehow so generous to humanity see his book on kings, which I actually like that book, believe it or not, but there's some problems with it where he's like, well, there's kings, so there's distance to the king and proximity to the king, and like blah, and I'm like, yeah, that's cool, it's all about relations and whatever, but like there's no way you're gonna convince me. That's a universal way of thinking. But that is a way we relate to signs of authority and the second order sign of authority is usually personified in the person of a king or something like that, and so you are on to something. But then again, graver's thought just gets really nutty when it comes here, because on one hand he's acknowledging that there's these kings and he thinks they're like trans-historical human references.
Speaker 1:On the other hand, it's like if we just thought about equality we could have it. And equality is when you do nice things for people and capitalism is when you don't, whereas this way gives us like no, these signs refer to material things and material experiences. And if you want, to say, make a society that is, say, relatively power, egalitarian, you actually do need to look at the inputs and the context and the environment of that society. And if you think about this semiotically, that would make a lot more sense. Like what leads to these kinds of signs? How would they be modeled? What do you build to reinforce them? What can you build to to channel these other signs that are kind of against them?
Speaker 1:And you know, I think that's really important if we're going to be talking about changing society, because the other, the other place I think you get stuck is either what the rationalists do and pretending all this stuff doesn't matter and that also human experience, because there's rational, is also stable throughout history, forever, because it's a very trans historical way of thinking. Or, uh, you get into a hotel, grand abyss of the frank of, like the middle of frankfurt school and later dorno, where you, you think, everything is emergent as a system of control and you literally can't see any way out of that, like there is no way out, there's no way to design a world that doesn't have these thought limitations on it, and so, basically, you have the eternal form of fascism always with you and anytime you try to do any kind of identity and identification, thinking you end up becoming a fascist somehow and like and you know, I'm not saying that's a very vulgar way of expressing adorno, I know, but it's not far from what he ends up at, you know, by the 1960s. So this is a kind of different way of thinking about this, where we take these inputs seriously, because they also have material constraints. These inputs are going to matter for how we program machines, how we deal with parasocial relationships, how we navigate these things. These semiotic references really matter, and one of the reasons why they really matter is parasol.
Speaker 1:I came to this realization literally yesterday. Um, I was thinking about why is narrative so important for explaining things in a way, other things about language aren't. Because narratives like oh shit, narrative is a simulation of embodied relations. It it's simulating, it's not real, but these signs approximate it to us in such a way that we interpret them as if we are learning about actual relations. So that is why narratives work, because they are a proxy for social. They're analogous to social experience. They're not the same as social experience, but they're analogous to it.
Speaker 1:And that brings us back to your point about the bat. No, we can't know exactly what it means to be a bat, but we can pump pretty damn close like and that's why narrative is important for people, because it's it is modeling sociality in a way that they can recognize and attach to analogously and understand in a more material way than they often can. Other symbolic reference, um. So, yeah, I thought your book really got me thinking about that a lot. Uh, I was reading, I was reading, I was actually reading this to prepare for this and sam delaney last night, uh, I was reading dahlgren and it's all of a sudden like a bunch of stuff came together.
Speaker 1:It was kind of like a eureka moment. It was like, oh shit, this is why this works. Because I have been wondering: why is narrative so different from other language? Why is it so effective as a teaching tool? It's a proxy for sociality, and it does so through signs that we recognize as proxies for sociality. That's why tropes matter, that's why all these character forms matter, blah, blah, blah, and I think it's really important. The other thing I thought about is that the linguistic turn in philosophy and theory ended up being a disaster. But it was an important disaster, because we did need to deal with language as opposed to, like, essences and the necessary. I mean, you know the amount of time we spend debating what's necessary and what's contingent.
Speaker 2:No, I think you're 100% correct about narratives, because you've got to think: the thing about experiencing reality, experiencing society and all that stuff, is that it's not inherently intelligible to you. In a lot of ways, the patterns that we use to explain these things have to be learned over time, they have to be invented and created, and to do that is a symbolic exercise, a semiotic exercise. And this is one of the things I realized, and this also relates to what I was talking about with Chomsky: any kind of logic you can think of, in terms of a specific operation of symbols, a specific pattern or whatever, can be applied arbitrarily to whatever symbols you want, and it can produce more or less coherent and real results. I have an essay in there on schizophrenia as it relates to semiotics.
Speaker 2:And you can take a simple set of rules and use them to extrapolate symbols out, even without any further experience. If you extrapolate them beyond what you have experienced, you'll have no inherent idea whether those extrapolations are valid in some sense or not, whether they relate to reality or not. I give the example: if you kept adding numbers together but you didn't know the rule that any two integers added together make another integer that actually exists—if you didn't know that all those numbers are real and actually valid in that way—then as you kept adding things, maybe you would stumble across numbers that aren't valid in some way.
Speaker 2:And this is the same thing with narratives, because people learn all sorts of narratives. And it's the same thing with ideologies. There's no inherent way to know which ones are actually representative of reality without further investigation. And from a memetic perspective, the ones that maybe fit better into your existing ideology—or, like we were talking about earlier, the way that knowledge is influenced by desire and trauma and all those things—there is memetic selection going on there that can affect which narratives people take up, and that affects the political realities we deal with.
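A minimal sketch of the extrapolation point above (an editorial illustration, not anything from the book): a purely formal rule will happily generate symbols far beyond anything "experienced," and nothing inside the rule itself tells you which results are grounded. The "experienced" set and the final check here are stand-ins for lived experience or further investigation.

```python
# Symbols we have actually encountered and know to be meaningful.
experienced = {1, 2, 3, 5, 8}

# A purely formal rule: combine any two known symbols by addition,
# applied blindly for a few rounds.
def extrapolate(known, steps=3):
    current = set(known)
    for _ in range(steps):
        current |= {a + b for a in current for b in current}
    return current

generated = extrapolate(experienced)

# Only an external check separates grounded results from free-floating ones;
# the rule itself applied them all with equal confidence.
grounded = generated & experienced
ungrounded = generated - experienced

print(f"generated {len(generated)} symbols: "
      f"{len(grounded)} grounded, {len(ungrounded)} never checked against experience")
```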
Speaker 1:Well, and that might explain certain things that are harder for Marxists to explain. The classic Marxist view is that your class positionality is going to explain a lot of your relations and a lot of your ideology, which is true enough. But Marx is smart enough to know that it doesn't actually explain an individual's ideology perfectly, and he readily admits it's really only true in two cases. There's an aggregate ideology that we can kind of come up with—this is not the word he uses, it's what I use to explain it to people—a collective ideology, or, to use a Lacanian phrase, an imaginary. Or there's an institutional collective, which actually has a kind of agency, because it has systems and rules that act as an agent, that people can come together and act through as an agent; that's part of what cybernetics is about. But these are kind of different things, and none of them will necessarily tell you what an individual worker, bourgeois, or petty bourgeois is going to think entirely, because you can't go back down: the individual's social world is informed by idiosyncratic events, which Marx clearly knows about, which is also why he doesn't say that the workers' ideology is going to inherently be good—there's not a one-to-one emergence like that. I always criticize Marx on this for having an insufficient theory of mind but a good theory of social aggregates and social minds. But you could use semiotics instead of, I don't know, Freudian psychoanalysis, which seems to be the one we're stuck on. There are all kinds of ways people supplement Marx with different theories of mind, and you're kind of at war with one of them, which is implied in some of Marx's works. I'm actually going to say that as a reading of Marx, particularly early Marx, this is the correct reading of what he thought; I just don't happen to think it's true: that there's an emergent correspondence—say, the reason why we call a pencil a pencil and think of it as a pencil is based on what it does.
Speaker 1:This is the EP Thompson theory of language. Yeah, but that treats human beings as like an emergent blank slate, basically Like, okay, so we have materialist meanings, we, we think of a book and the words are arbitrary for ep thompson but we think of a book as a book, because books do book things. We, we, we figure out that they do book things and then that encodes the idea and then it's materially existent and it becomes, and then we, then we create things according to the idea of a book. So, and then you go through that hegelian process of like, you have the emergent quality of reality that you then abstractify and then you create things that follow that emergent quality of reality, etc. I think the second half of that's true. I actually do think okay. So we have these feedback loops.
Speaker 1:Once we figure out something, we start creating things according to that rule, and that actually reifies our conceptions and gives us a material thing that wasn't there before—I keep holding up the book, for those of you who are listening—a material thing that wasn't there before, to make concrete the abstraction that we've deduced from material reality. But I agree with you that that's unsatisfactory in a lot of ways, because there are structures in the mind, and the cognitivists aren't wrong about that. So in this weird way, E.P. Thompson has the same limitations as the early cyberneticists did, in that they assume the human mind is blank until it experiences things empirically, and that that feedback loop causes emergent concepts, et cetera. And I think we have structures that just kind of exist in our brain that have emerged.
Speaker 1:I would even venture—I can't prove this—that they've probably emerged in most self-reflective animals to some degree. The moment you start recognizing self-reflectivity—this is still kind of Hegelian—you start needing structures to fit that into, and those structures already kind of exist in the human mind, in the way that we can encode things from sensory inputs as signs. Because otherwise, I don't know why—like you're right, why is a tree meaningfully a tree?
Speaker 2:Some of it is encoded in the mind, biologically or whatever. Some of it is encoded socially. Socially, we've developed these structures as basically cultural tools, because this is why we have everything that we have today: it's born from this iterative development of abstractions that has given us things like books and computers and the sciences and whatever. You couldn't do this from scratch in a very short amount of time. This is also one of the problems with just thinking that if you have some sort of self-improving AI, it can suddenly do magical things or whatever, like the rationalists seem to imply. The ability to create these different concepts for things comes from a long history of applying different rules to different concepts and ideas to create new ideas.
Speaker 1:It can't emerge just from pure sensuous experience of a thing, and it's weirdly asocial. That's my critique of Chomsky: his notion of universal grammar is trying to explain sociality but is also weirdly asocial. He has kind of a genetic theory of it just emerging in one human ape, or what have you, and it's actually strange, because the rest of his philosophy and ideology doesn't flow that way at all. Whereas the emergent theory—for Marxists who are materialists, one of the things that makes Marxist materialism different from, say, normal bourgeois materialism is that we think society is kind of a material input, which I think breaks people's brains a little bit, and we think structures are part of material reality.
Speaker 1:And I think even classical Hegelian Marx thinks that—that comes from his study of Epicurus and all that stuff. But I do think there's this way in which the classical Marxist theory of how a pencil is a pencil is still weirdly asocial; the social feedback gets cut off somewhere, even though it's still implied, because in Hegelian phenomenology it's the recognition that others exist that's why you'd even know your self is a thing. And I actually do think there's a fundamental truth to that: I know that I am different from other things because I've encountered a thing that has agency that is not me. Therefore I know that I am not everything in the universe—unless I become a solipsist. So there you go.
Speaker 2:You know, this is why Althusser focuses on The German Ideology as kind of a turning point, but a point where Marx is still within that kind of sensuous empiricist humanism. Because what he's doing, on a purely functional thousand-foot view, is critiquing Feuerbach and so on with these economic concepts, to say that these are more real. But at the same time, the way he justifies this in the text is that these categories are more sensuously real—that labor, because of its sensuous reality, is this point of critique against them. And he abandons that; he doesn't say that anymore when he gets into Capital and so on.
Speaker 2:He's just using those economic categories. But that justification is really taken seriously by a lot of people, or is just kind of an inherent, basic ideology that people have. Because from a phenomenological perspective, our experience of categories does kind of feel that way: when we see a book, we just experience all our associations with the book. We don't think about the semiotics of it and what it took to get those associations; we just kind of get them. So I think there's a natural attraction to that point of view through that.
Speaker 1:Particularly if you're not a substance dualist, which is the other general explanation that feels natural to everybody. I have a voice in my head. It emerged probably sometime between five and eight years old. Even if I don't hear voices, I probably at least think in pictures are vice, for, like, most people do both, but some people do one or the other, you know there's all kinds of things going on there.
Speaker 2:I wonder if there's anyone who thinks and smells, that would be interesting.
Speaker 1:I mean, I'm sure dogs probably do, but you know. But those are the two Like, if I'm just thinking about my experience in the world. Those are the two things that like it makes sense when you come to that conclusion just based off your phenomenological experience yeah but there's all kinds of problems with both.
Speaker 1:Like, I mean, substance dualism seems obvious because, well, what's the thing in you that's separate from you that's experiencing any of this? And you're like, a soul. Well, what the fuck is that? God. Well, what's that? The thing which is the ground of being but is not? Uh-huh—sounds like you're now inventing categories that I can't even begin to comprehend. Or the natural correspondence theory, basically: well, we experienced the ability to communicate, so we did that and we reified it more. But there's no semiotic building up to how we could encode that in any way. It's just: this is a thing; we then name it; once we've named it, we come up with rules for it, which reinforces the thing, and then the abstract becomes real. And like I said, I think the second half of that equation is kind of true, but the first half is asocial. And this brings me back to my point about words. When people go, words have meaning, I always go—
Speaker 2:If we agree on what they mean, they do. No, I think that's important. There is also a sense in which we do phenomenally experience the semiotic division of units of meaning. It's that classic thing, I think from St. Augustine or whatever: if you point to something and just say what it is, you won't necessarily get what the word actually means, because you have to figure out what part of that is the thing being spoken about. And the way that you do that is through repetition—and this is also a thing with machine learning, which is what's in common with all those different examples. When you learn something for the first time, you kind of get a window into that semiotic process of eliminating what part is actually being spoken of, and adding parts, and forming the thing you're actually supposed to be talking about when you use a word. So the phenomenal experience of semiotics is the process of learning.
Speaker 1:Uh, in my opinion, well, I mean it, and I think the, the process of learning, as you imply, is both pointing to but also bracketing out. Yeah, um, and that's super important. So it's like, okay, well, uh, you know it's not, which makes the like the, the Aristotelian way of thinking about it as, like, necessities and contingencies are essences and, uh, what's the other thing? Essences and forms, excuse me, uh, that that isn't as helpful. You know, now you can see how an ancient person is thinking through rationally and you come to these conclusions, right, like, um, that there's an essence and there's a form, that the essence is what is unique to you and the form, you know, and that's true for everything.
Speaker 1:So the teleology of the thing is based off of the trait that is unique to it. That's why it is blah, blah, blah: because its uniqueness makes it all fit in the universe. There are also theological and all kinds of other assumptions in that. And if you're designing shit, that also seems natural, because when I design technology, technology does initially have a teleology. We design this stupid Coke bottle to hold Coke. It might take on new teleologies with use or degradation or any other number of things, right. But that's not how the universe probably works, unless we assume there's a mastermind God—but then that creates all the other problems we just talked about before.
Speaker 1:And I also want to point out that people have realized there are problems with this just logically, for longer than most of the religions that exist today have been around. You know, there's the Epicurus problem, there's Buddhist problems about this. You know, like the Buddhist, one of the Buddhist things that I really take to heart is dependent co-arising. It's basically like yeah, once a concept comes into existence and has a semiotic form, it also implies a bunch of other concepts that coexist the moment you think it, that both oppose it, which is, you know, we could think of it in terms of stupid nerd shit. Uh, the mario creates wario problem that the moment you have defined mario enough for mario to be coherent you have also created the idea of a Wario, the anti-Mario that breaks all the rules that you defined as Mario.
Speaker 2:I mean, this also, I guess, kind of gets to that one essay I had on why the universe isn't math, which basically started as a back-and-forth in the comments of your Patreon: someone wrote a blog post that I replied to.
Speaker 1:Oh yeah, a person that I no longer have any contact with. But yeah, it's me versus the Platonists, but also the subjectivists, because people think I'm like a math subjectivist. I'm like, no—the universe is not math. Math can describe things; it's a symbol system that is coherent, that we can use once we bracket out qualia and throw that out the window. But I don't actually think the world is encoded in numbers. And people say, well, but I'm not a Platonist, except for math. I'm like, that doesn't make any sense. Why are you making a metaphysical claim about the existence of abstractions, but only for numbers?
Speaker 2:yeah, I mean like nobody like really stops and thinks about like the, the cybernetic aspect of it here is that like what, what? Like there is no math that is not operating in this like semiotic way of these symbols that stand in for things that have to have like these things that they're not in order to have meaning. What have you? What are we even talking about? When we're talking about, like the totality of the universe being math? It doesn't correspond to any kind of like actual human social experience of math. Like it's inventing some sort of like, it is creating this whole like idea of totality that they're connecting back to math, but for no reason than, I think, cultural reasons. There. There is no particular scientific reason why you would make that connection.
Speaker 1:But, nico, this is why I don't trust engineers and mathematicians. People are like Barn, what, I thought you were. A science guy, I do. I love engineers and mathematicians, but I don't trust them, um, because they start believing their shit is metaphysically real, uh, but but anyway, I do think the semiotic thinking, though, gets you out of this whole like, well, you're either a platonist or you're a nominalist, or you're either a, um, you're either like a Forbachian materialist or you're a any number of kinds of idealist. The other thing about idealist is when we call people that I'm like, yeah, it's often true, but also, there's about a bajillion kinds of idealism so like the funniest thing about idealism is the type that are like directly spiritualist, are often like more realist, uh, than the ones that aren't yeah, I know that's the funny thing, because they're actually thinking about certain stuff in ways that actually are weirdly material sometimes.
Speaker 1:Um, but I like like one of the things I tell people is like you really actually should study buddhist and early christian um epistemology and stuff, because it actually does raise questions. You probably need to think about that. Descartes just tried to like blast away and he made us forget about them, but they haven't all gone away. Um, like you know, when I talk about nominalism and but people think that I'm like arguing that there's nothing that's like essentially real or no structures to reality, I'm like no, I just think you're getting too caught up in like word equals, like sick, that sign equals somehow inherently signified um, which it only does because you've trained yourself to make it do so um, this is the other thing that I talk about a lot in the book in the sections on semiotics that, like any system of signs, is materially encoded, but that also means that its limits are material.
Speaker 2:Like a thermometer only works up to the temperature until it melts. Like that system of signs on the thermometer isn't a metaphysical thing, it's a physical thing.
Speaker 1:A scale only works until it can't measure the weight upon it.
Speaker 2:This also has implications for AI and epistemology in general. Every sign is materially encoded somewhere, in something, and every system of signs has to have some sort of way of disciplining itself so that one thing always means the thing that's associated with it. And when you have a breakdown in that, you get problems. Which is also one of the problems with the rationalist theories of how AI will work: you can't just encode all of physics into something at some point and then get this god-being that can do whatever it wants, because every physical thing within its representation of reality has to somehow be mapped onto some real thing. There has to be some control system making that happen.
Speaker 1:That makes sense, which does complicate a bunch of assumptions about what AI can and can't do.
Speaker 1:It's also interesting to bring the I-word back, because we haven't talked a lot about it: the way ideology functions, if it doesn't denotatively encode signs, it definitely connotatively encodes them.
Speaker 1:And one thing I've always wondered about with AIs and AI learning is this: word choices and signs actually do imply norms. Even though we can't derive norms from descriptions, descriptions are always couched in normative language—there's no way not to, for the reasons we've talked about involving signs, because in saying this sign applies to this thing so we can comprehend it, I've made a normative choice. So it has always made me wonder: are AIs going to develop weird ideologies that are adjacent to our ideologies, that come about from aggregation, but—since there's no individual life, trauma, brain, you know, limit there—those ideologies may be somehow different when they start emerging in aggregate? What do you think about that? Because I think a lot about that. I'm like, what if this thing's a capitalist in a way that I've never imagined?
Speaker 2:Like.
Speaker 2:I mean, I think I do talk about this a little bit in the very first essay: that AI, at least as it's developing now out of LLMs, is hyper-intelligible—it is more intelligible than all of that messiness associated with humanity.
Speaker 2:And the ideologies that it produces in that way will probably be a little more cliché than actually existing ideologies as they're expressed by people, and that can perhaps make them more inflexible, because there aren't the necessary counterbalances of lived experience. There was a funny study done recently that has raised red flags because of the ethics involved: these academics created AI agents to debate people on Reddit about various issues, and the AIs were more effective at convincing people than actual humans, and they also adjusted the characteristics of the persona and its history to make it more convincing in arguments.
Speaker 1:So they're good at code switching basically.
Speaker 2:Yeah, I mean, they would try to find who would be the most convincing person for this message to come from and would take up that role. So that placement in the semiotic field, by ideological interpellation, is much more thorough and exacting when it comes to AI.
Speaker 1:What's interesting is that I think, if you've ever seen ideologies on Twitter, I think this is already implied in aggregation anyway, because ideologies on Twitter are simple, they're kind of dumb, but they're very persuasive and they're very interpolatable. Like you can end up in them by accident or by pretending, or or this and the other, and so it makes sense to me that ai would be even more so, because it's programmed to like, it's programmed to answer your needs in some ways, and so, as it argues with you, it's going to be doing ideal types which are going to be convincing, even if it's less nuanced, maybe a little dumber, um, but it's actually dumber in so much that it's been produced to be convincing.
Speaker 2:It's dumber in a smarter way. Well, part of this is at least somewhat contingent on the existing trends in the industry of how they do reinforcement learning, to make them more sycophantic and responsive to people's individual psychology.
Speaker 1:They are pretty sycophantic so far. That's one of the things I don't like: I know when a student writes me a letter with AI, because it treats me like I'm Kim Jong-un.
Speaker 2:That's not how the base models would respond, and there has actually been some hard evidence from researchers recently that training them in this way actually cuts off certain capabilities that exist in the base model. The base models are better at producing random numbers, for example, or playing rock-paper-scissors, and they have more creativity, because they are allowed to produce more low-probability responses, whereas the reinforcement learning is trained to produce these more common, popular responses based off of popular prompts and what have you.
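A toy illustration of that point (an editorial sketch, not the researchers' actual experiment; the probabilities are made up): preference tuning that collapses the next-token distribution onto a favorite answer lowers the entropy of the model's choices, which makes it both worse at producing "random" outputs and more exploitable at something like rock-paper-scissors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy next-move distributions over the three rock-paper-scissors options.
moves = ["rock", "paper", "scissors"]
base_probs  = np.array([0.34, 0.33, 0.33])   # near-uniform "base model"
tuned_probs = np.array([0.80, 0.15, 0.05])   # "tuned model" collapsed onto a favorite

def entropy_bits(p):
    return float(-(p * np.log2(p)).sum())

def exploitability(probs, n=1000):
    # An opponent who always counters the single most likely move
    # wins every time that favorite move actually gets played.
    picks = rng.choice(len(moves), size=n, p=probs)
    favorite = int(np.argmax(probs))
    return float((picks == favorite).mean())

for name, p in [("base", base_probs), ("tuned", tuned_probs)]:
    print(f"{name}: entropy={entropy_bits(p):.2f} bits, "
          f"exploitable {exploitability(p):.0%} of the time")
```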
Speaker 1:This is something I'm fascinated with with the AIs: stuff that I don't think is obviously related starts showing up as related, at least in aggregate, maybe not in human minds, but in these machine-learning models. Like, if you get good at certain kinds of language, you start sucking at math, no matter what, and that isn't inherently obvious, because syntax and mathematical skill are related in the human brain. They're not one-to-one the same thing, but they're not unrelated. And I read a math article recently that was trying to argue that math pulls from non-linguistic parts of the brain, and I'm like, but what are you counting as the linguistic parts of the brain? Because language—and the kind of language you use, whether it's spoken or written—pulls from massively different parts of the brain too. There's no left-brain, right-brain nonsense; it's not that simple at all.
Speaker 2:Now, I do think they're going to solve some of those problems. They've been implementing this for a while—it's one of the reasons the math has gotten better—the mixture-of-experts type of models, where they'll have one thing that focuses in on math, one thing on politics, one thing on whatever, so that it's more specialized: basically parallel processing in the same agential AI.
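A toy sketch of the mixture-of-experts routing idea (an editorial illustration, not any particular lab's implementation; sizes and names are arbitrary): a small router scores the experts for each input and only the top-scoring ones actually run, which is how one model can contain several specialists.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

router_w = rng.normal(size=(d_model, n_experts))            # router weights
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_forward(token):
    scores = softmax(token @ router_w)        # router decides which experts are relevant
    chosen = np.argsort(scores)[-top_k:]      # keep only the top-k experts
    out = np.zeros_like(token)
    for i in chosen:
        out += scores[i] * (token @ experts[i])  # weighted sum of the active experts only
    return out, chosen

token = rng.normal(size=d_model)
out, chosen = moe_forward(token)
print("experts activated for this token:", chosen)
```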
Speaker 1:Yeah, that's interesting. That is different from people. To bring back the point you brought up earlier, one of the limits I have with AI, and one of the reasons I complain about it, is not a lot of the stuff I hear leftists complain about—although there are people I like, Ed Zitron, a lot. But I do sometimes think he conflates his criticism of the AI industry with AI itself. Zitron, you know, the Better Offline guy—I like him a lot, but I think sometimes he critiques the AI industry as if it were the possibility of the technology, and I'm like, those are separate problems.
Speaker 1:My man, that's a very common conflation. Yeah, it's kind of like when people conflate the problems of genetically modified food with what Monsanto does, and I'm like, but those are different things. They really are different things. If we didn't live in a capitalist society, would genetically modifying food be that bad? We've done it for thousands of years, peeps; we just did it slowly. But it's the same with the AI stuff, because I hear criticisms of the AI business model. One of the points you did make that I think a lot about—and I don't think we can know this yet—is: is there an energy barrier to doing this? Because one of the things I think about with human agents is, if you actually think about the processing we can do—and yes, we do consume a lot of energy to do it; our brain eats a ton of our calories, although, guys, you can't think your way skinny, sorry.
Speaker 2:Don't, I know it.
Speaker 1:But the brain does eat a disproportionate number of your calories. And when I think about human processing capacity and the energy that we use to do it, we're actually fairly efficient machines.
Speaker 2:Certain processes in the body are close to the entropic limit of what's possible.
Speaker 1:Right. Do you wonder if there will be energy limits before we actually see what these things are truly capable of in the abstract? Because I do worry about the energy use, particularly about misuse of AI—I'm like, we're burning down forests to make Studio Ghibli memes.
Speaker 2:This is a stupid use of this technology. I mean, like I've said before, they are incentivized to make things more energy efficient with this, as opposed to something like Bitcoin, where it's the opposite. Even with the thing about Google including AI in search results, they've started to get smarter about it by caching common search queries, so they don't have to actually get the AI to do it again when the same query comes up. But on the more abstract question of what's possible with AI in general, I think humans will always be more energy efficient; the fact is just that we can produce a lot more power with electricity than is required to run a human. And I'm skeptical in the book that this will necessarily produce superhuman AI.
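A minimal sketch of the query-caching idea just described (an editorial illustration; `expensive_model_call` is a stand-in, not any real API): identical queries get normalized to one key and answered from memory, so the expensive model call only runs for genuinely new questions.

```python
import hashlib

cache = {}  # normalized query hash -> stored answer

def expensive_model_call(query: str) -> str:
    # Stand-in for the costly generation step.
    return f"generated answer for: {query}"

def answer(query: str) -> str:
    key = hashlib.sha256(query.strip().lower().encode()).hexdigest()
    if key not in cache:                       # only pay for genuinely new queries
        cache[key] = expensive_model_call(query)
    return cache[key]

answer("What is semiotics?")
answer("what is semiotics?   ")                # normalizes to the same key: cache hit
print("model calls actually made:", len(cache))
```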
Speaker 1:Because I do think that implies unlimited power. Unlimited power inputs.
Speaker 2:Yeah, also, the rationalist point of view was always like oh, it'll be self-improving and the efficiency gains will be exponential or whatever. It'll basically be like magic. They actually used that word at one point, but at the same time, my personal prediction maybe this is a bit glib is that any extent it speeds up research and development will actually exactly match the increasing difficulty of research and development. So everything will just continue linearly. Um, uh, but uh, I I think that, especially when it comes like, it will probably, I think could achieve something that is similar to human levels of intelligence and a more conventional sense. Um, I think that's within the realm of possibilities.
Speaker 2:Part of that isn't necessarily about energy, although there are going to be energy constraints, and that will influence the adoption of things like androids and so on—what's the efficiency versus humans? But, as I was talking about before, this development of new ideas is part of a very long process that is in part cultural, and it's the sort of thing that's subject to the halting problem: we're never going to know.
Speaker 2:There's not going to be a systematic way to keep producing arbitrarily better systems of learning and that sort of thing. We definitely can't do it in people. It's going to hit some limits; I don't know exactly when that limit will be. But I think we still have a ways to go on current development. I think people will figure out how to make the most of language models without the reinforcement learning—there's still alpha in that—but we're still a ways from the end of this, until we hit the S-curve, you know.
Speaker 1:Yeah, I think about this in some ways because a lot of people predicted we'd hit the S-curve soon. Part of me is like, well, maybe we're training these poorly. Or maybe we've actually hit a capacity on the current model they're developing, but other people are developing ways around this and we're just relying too much on ChatGPT. And the reason I think about that is DeepSeek, because I was like, okay, yeah, they probably did black-market import some chips, but they did it with less than we did and it's almost as efficient. And yeah, they probably stole code from ChatGPT, but they got it to operate on a more energy-efficient basis, which does indicate that it's possible.
Speaker 2:So there are a couple of research directions I think are promising. I think there'll potentially be a shift away from transformer models towards language diffusion models or similar things. The problem with LLMs right now—this has been observed by a lot of people—is that the pattern recognition is still pretty ad hoc; it doesn't create elegant, internal, simplified models. And there are systems that do that which I think have been under-researched and haven't been optimized to do the same things yet, whether it's diffusion models or KAN models, which kind of invert the neural-net type of thing: neural nets use one activation function, whereas these allow all sorts of functions to be composed into larger functions. I think those could probably produce more coherent thoughts and patterns.
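Assuming the reference is to Kolmogorov-Arnold networks (KANs), here is a toy contrast with an ordinary MLP layer (an editorial sketch, not the architecture from any particular paper; the Gaussian-bump basis is an arbitrary choice): in the MLP, one fixed nonlinearity is shared by every unit, while in the KAN-style layer each input-output edge carries its own small learnable function, and the output is a sum of those per-edge functions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n_basis = 3, 2, 5

# Ordinary MLP layer: weighted sum, then one shared fixed nonlinearity.
W = rng.normal(size=(d_out, d_in))
def mlp_layer(x):
    return np.tanh(W @ x)

# KAN-flavored layer: every edge (i -> j) gets its own learnable univariate
# function, here a small sum of Gaussian bumps with learnable coefficients.
centers = np.linspace(-2, 2, n_basis)
coeffs = rng.normal(size=(d_out, d_in, n_basis))
def kan_layer(x):
    out = np.zeros(d_out)
    for j in range(d_out):
        for i in range(d_in):
            basis = np.exp(-(x[i] - centers) ** 2)   # evaluate the bumps at x_i
            out[j] += coeffs[j, i] @ basis           # this edge's own learned curve
    return out

x = rng.normal(size=d_in)
print("mlp output:      ", mlp_layer(x))
print("kan-style output:", kan_layer(x))
```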
Speaker 1:And there's some development there, Because it does seem like hallucinations are increasing in the current models.
Speaker 2:I don't know if they're increasing, although the sycophantic tuning probably makes it so that you can get the model to be more arbitrarily interpolated, which includes things that would basically be hallucinations. But I don't think that if they had a model that would just transparently produce worse results, they would release it. I think there are going to be lots of developments, probably in ways that people aren't necessarily expecting right now. What's not happening, though, is that there's a secret super-AI, superintelligence project at OpenAI or Anthropic or something that's going to change the world in a year, like the rumors like to speculate about.
Speaker 1:Yeah, Scott Alexander did this document that literally reads like fan fiction—he's part of that document that was like, so you have supercomputers replacing all human interactions by next year. And it's also in that weird rationalist mode where they're both super afraid of this thing but also think it can do everything. So basically you're imagining computer god, which, cool—maybe you just should have learned to write sci-fi. This is often my response to the rationalists, because I'm like—
Speaker 2:I'm like, I feel like you're projecting now, definitely, I mean my whole problem with the rationalists is that, well, one is this whole thing that they started with.
Speaker 1:fan fiction is where the community started, but related to that is that they don't read outside of their like, uh, big canonical sources very much um, and their big canonical sources are kind of shitty but go ahead.
Speaker 2:Yeah, I mean, that's because of their utilitarianism and all that shit, but it's that they don't interact with these other modes of thought, because they think that they've got access to the one true way of thinking, which I think is badly mistaken. And because they're always extrapolating from that system of thought, they're always producing these absurd results that they believe in sincerely. And, like I talked about on the Cosmonaut podcast too, they are just—
Speaker 2:It's a failure of Americans teaching STEM people philosophy or whatever, but also a failure on the part of the humanities of not learning how to properly understand these objects that became the objects of interest for the rationalists. But it does seem like their particular ideology breeds this certain paranoia that has created a lot of cults—the Zizians is a recent one that is more absurd—and they have the same thing: they dial that fan-fiction sort of thing up to 11, because all of their citations in their writings and blogs are like Steven Universe episodes and stuff like that. It's this pop-culture shallowness of thought that permeates. It also happens in the humanities—it happens less there, because there's assigned reading—but you see this in a lot of places. Oh yeah, there's a degenerative paranoia to the rationalists.
Speaker 1:I mean, I do think there's a reason why you keep seeing new atheists, who are also adjacent to this rationalist movement and use a lot of the same definitions of rationality, becoming almost like the most biased religious people in everything they do, because they've now assumed that there's some kind of universal justification for it. And I mean, you know, I hate them.
Speaker 1:I don't have any qualms saying that but it does piss me off as a person who's like but reason is good though, like I remember. I remember when Chapo like launched it's you know book and it was like blah, blah, blah, and I'm like no, I mean, I know that that's a joke, but I wouldn't even joke about that like like like logics and facts are actually fucking important guys. Like we're supposed to be materialists, not just moralists.
Speaker 2:Honestly, people do get negatively polarized on that stuff, but like this is on, on, like, I got to give them, the rationalist, some credit, right, because when they approach a problem like an intellectual problem, they take it very seriously, absolutely.
Speaker 2:And the problem is that the problems they chose were impossible ones, ones that were self-defeating, but they really tried very hard intellectually at them, and I've got to give Eliezer Yudkowsky some credit there for formulating this really elaborate epistemology and theory of mind and everything, which is what allowed me to do the critique of it. That was a lot harder than with all the post-rationalists and the neo-reactionary people—I can figure out how those people are frauds very quickly and explain what the hell is going on there. To pinpoint exactly where the rationalists were going wrong took a lot of racking my brain, a lot of self-reflection and research, because the problem was that their argument rested on this idea that the ideal sort of mind would be rational in this specific way that they're talking about, which is related to coherency—the coherency of the things you want in the real world—which seems like an intuitive thing to start from, right? That there's a coherency of things that you want in the real world and that a rational being would approach things from that perspective.
Speaker 2:And I had to say why an ideal sort of intelligence would necessarily not behave like that, which is a very hard thing to prove, and which eventually led me to that whole theory of higher-order signs—that having goals, and having them shaped by higher-order signs in that way, is a necessary aspect of intelligence. And that was something that was really hard to come up with, so I give them credit for that at least.
Speaker 1:Well, I mean, that's a good thing. I also find that sometimes it's really easy to go after the humanist Cause. I'm like well, that's just vitalism, you can't prove any of it. Or you know like. Or like that's just uh. Another one like that's so subjectivist that I don't understand how you think language communicates anything. Or or uh, or, I have to believe in god for this to work.
Speaker 1:Hegel. Um, sorry. You know, I know there are people like Dominica who swear to you Hegel is a secret atheist, but I have yet to actually find any smoking gun.
Speaker 1:It's funny because everyone's like the most famous thing is like, is your Hegelian?
Speaker 1:I'm like, no, I just think Marx is a Hegelian, and I do think understanding German idealism is super important to thought, but I actually don't think a lot of German idealist metaphysics makes any good sense. So I think this is something for people to deal with, and I think it's also interesting because, unlike the rationalists, it does in some ways take prior human philosophies and their development—Aristotle or Nagarjuna and all these other historical figures—as being meaningful to the development of these things. Not necessarily in a linear way, but philosophy is kind of like a failson: it fails upwards. So there is stuff being learned, and I do think the lack of philosophy knowledge among some of these people really is a limitation for them. And then they, like, discover it—"I'm going to master logic"—and I'm like, uh-oh, what's going to happen when I explain modal logic to you? This isn't going to go well.
Speaker 2:It's funny you say that, because I was just reading this Althusser book, and he has this really funny line about how philosophers are kind of a joke. There's that old thing about how the philosopher is looking up and thinking big thoughts, and that's why he falls into a well while he's walking. And Althusser goes: the philosopher is also coming up with a whole theory of how he's going to fall into the well, so that he won't fall in—but he still falls in. He falls in once in real life and once in theory, and that's why it's a joke, the comedic timing of it or whatever. But there is a reason why Althusser calls himself a philosopher, even though he has this whole idea of non-philosophy and stuff like that. Because, at the same time that philosophy is genuinely used in this idealist way—it is a ruling-class ideology, more or less, or frivolous in a way that kind of serves a certain reproduction role for society.
Speaker 2:There is a way that this more abstract thinking is still important to understanding things, and you really have to exercise your mind in that abstract way to grasp a lot of very important things. For Marx, this was the process of understanding what a commodity was and how it works and what the processes of capitalism were. And not many people undertake that sort of project.
Speaker 2:One, because it's difficult to do that sort of materialist analysis that is still generative of abstractions—a lot of people will do materialist analysis in a purely reductive way, trying to avoid creating more abstractions. Or, the other way, they prefer to do the idealism thing, which is in many ways more satisfying, because in its reliance on clichés you get the result that you want on paper. But that's also why you fall into the well. So the goal of philosophy is to create a philosopher who doesn't fall into the well.
Speaker 1:Yeah. I've been thinking about this in terms of AI, though, because AI really is a different problem from a lot of others. I think a lot of people are mad at it because it doesn't do what they want it to do. But then again, I'm like, you don't understand how people think anyway—and I mean that both in the snarky sense and in the non-snarky sense. Your theory of mind is not particularly robust, so you're asking something to do something that I don't know a theory of mind could do. And I don't think LLMs are anything like conscious yet. There are things about the tech that I am not happy about: the fact that it wasn't initially built with any kind of energy constraints or efficiency limits; they just assumed it would teach itself, which is a pretty big fucking assumption—and there are people responding to that.
Speaker 1:The industry is really shitty. We use this for stuff it's not optimal for; it creates a lot of widgets. But when I listen to people say, oh, it doesn't do much, I'm like, it offloads a whole lot of tasks that are actually worth offloading. I can actually work much more efficiently with it.
Speaker 1:But the use cases that most people use it for is actually not stuff that is particularly good at. Like I'm going to just throw a prompt into it and ask it to write an essay off of a prompt that has contextual stuff to me in it. But I'm just going to assume that the AI is going to know that. And I'm like why would you assume that you know? Like, or you know it's going to generate words to the essay statistically and they're going to be grammatically perfect but perfectly vapid. Like I get a lot of that.
Speaker 1:But then when I use AI to coach students on writing, sometimes it actually does a decent job. So I'm like, okay, we are misusing this product. The business is selling us this product wrong, for both philosophical and capitalist reasons. They're incorporating it into things that don't need it, and also, at this point, they're calling things that aren't LLMs AI, because you can pretty much call anything an AI—I mean, a calculator could be called an AI if you really want to stretch it. And there's a lot of misunderstanding of it. For example, I got in an argument with a technical writing professor recently at my day job because he said AI is the calculator for writing, and I literally laughed.
Speaker 1:And I was like if you believe that you understand neither writing nor AI. The professor actually does kind of understand writing, but he does not understand AI, and so it's just. There's a lot of, there's a lot of misunderstandings here, and I am also worried about its misuse affecting human development, because I think we see evidence of that. But you know, every fucking technology right now seems to be misused in ways that negatively affect human development that, if you think about it for five minutes, aren't necessarily inherent to the technology, like there was no reason why social media has to suck as bad as it does. Yes, we designed it that way and I remember when it didn't, you know, because it wasn't initially trying to drive negative dopamine hits by angering you all the time to keep you looking at ads. That wasn't its initial function and when that function, when that commodified function, was put in there, the easiest way to get people addicted is to get them pissed off all the time.
Speaker 2:What's funny is that Blue Sky was not designed that way, but still does it just because it's full of liberals.
Speaker 1:Right—who have trained themselves. But this is actually a kind of semiotic and cultural point: they trained themselves, by using Twitter in a certain way, to think that that's how you use everything, even things that aren't run by that algorithm. Because I like Blue Sky, actually. When I ignore what people use it for and go on, like, the sub part, because it's easier to search things out on it, I can find whole discourse patterns that I can't find on X. But if I just look at my feed, it's like, well, this is just 2015 Twitter, and everyone thinks that's good, and I'm like, no.
Speaker 2:Well, it's an improvement over what Twitter is right now. It'd be true, which is sad.
Speaker 1:I mean, Twitter the other day was flashing up—I'm not saying "Nazi propaganda" as in just right-wing; actual Nazi propaganda—and I went to click on it, like, who the hell is this? And the guy had like 500 followers and had just bought a blue checkmark. And I'm like, why in the fuck is it showing this to me?
Speaker 2:And now you get ads that are just hardcore pornography, right.
Speaker 1:Or the dumbest widgets you've ever seen, which are going to be tariffed away by the Overlord-in-Chief anyway. I mean, the whole thing's just absurd. But social media feels absurd. I mean, have you spent time on Facebook at all recently?
Speaker 1:No? I don't. It's a kludge of doom. Partly because it has both the same ideological functions as Twitter and the same human-connection functions it used to have, but only for people over 35, and also it streams Instagram at you constantly. I will not be logging in anytime soon. And Instagram—I mean, Instagram is social media only if by social media you mean something totally algorithmically controlled, where even the stuff that you follow you're not necessarily going to see, even when you try to.
Speaker 2:So have you gotten to the last essay In the book by any chance?
Speaker 1:No, I have not. Let's talk about it, though, because I can just pull it up—but I'll be honest with you, I haven't finished the last bit. So, "The Holy Spirit and the Machine"—yeah, that one's pretty small. All right: "Our age is one of closed hearts." Why? Let's go.
Speaker 2:well, I think that. I think that relates to what we were just talking about. Like the social media has closed our hearts to each other, more or less.
Speaker 1:It is. I mean, it's objectively an empathy-killing machine—and I don't always think empathy is good, by the way; I am a follower of Paul Bloom on this—but the more people interact with social media, the more their empathy for others goes down dramatically. That's been studied, so it kind of bears that out. What do you think that is? Is it because we're interacting with people solely as signs, or what's going on there?
Speaker 2:I mean, I think it's because all the systems around us are designed to orient us towards commodities in one way or another—whether it's selling them or making them or making us into one. It's not designed to create human relations, and this is something I noticed very early on. I have an essay with Palladium called "Do You Feel Lonely?" It goes into the centuries-long trends in capitalism that lead to social atomization from those simple premises. And what I point out in that essay is that there are basically two ways this can go. AI is basically kind of a living text that will eventually be—and in some ways already is—interpellating people. It can either be used as a means for people to cope with that social atomization and loneliness, or it could be used to actually be a part of social reproduction in a positive way and open people's hearts to each other.
Speaker 2:I frame this in an inherently religious way: this is the function, at least the explicit function, of evangelical religions—trying to get people to open their hearts to each other so they can receive the religion or whatever—but this could be a positive function that AI fulfills. I make a similar argument about Blue Sky in a blog on my Substack: it has a choice—it can either continue this bullshit or it could be better. Though honestly, I don't know if it has that choice in itself; those are just the two paths that are available to it.
Speaker 2:I think that it really depends on more people getting on it and it not just being like a liberal, like melding, like a cauldron of liberalness.
Speaker 1:Well, I mean, that is the one advantage to X, and it's the only one: there's still enough of a mix there, even though it's basically Truth Social part two now. Far-left discourse, for example, is actually more vibrant on X than it is on Blue Sky, and the reason why is that Blue Sky is clearly liberals who don't want opposition, and they also don't know how to deal with people who oppose them but are not conservatives or reactionaries—they're just like, does not compute, worldview breakdown. I've just noticed it, even though Blue Sky is also full of far-left rhetoric; it's just used in a completely different way. But I do think about that opening-your-heart function for more narrative forms of social media when I think about this.
Speaker 1:Like, LiveJournal, towards the end of its existence, was doing the same kind of policing and stuff, because it was tied to the para-academics that you saw on Tumblr. But early on it was not like that at all, and I was exchanging with people on my friends list of all kinds of ideologies, based off of common interest, because it hadn't figured out how to commodify itself very well yet. Very early MySpace, which was more commodified, still couldn't figure it out other than ways of sharing independent band music or whatever, and so it was a more positive experience. And early Facebook was a more positive experience—Facebook for me was a form of community where I made lifelong friends, until about 2015. And if you look at the changes in the algorithm and what it started doing, that's about the time that stuff set in. Now, it started earlier—it starts all the way back in late 2012.
Speaker 1:But, you know, it is set in and those are choices that were made, and, unfortunately, people today think that, oh, this is just what technology was going to do and no, we didn't have to program it that way.
Speaker 1:Like, this is a choice that was made, and not by the users. Now, what's funny about that—and Blue Sky is an example of this, or Mastodon, which is harder to use but still is one—is this, and it's a problem of ideology in a way that is not obvious, but when I explain it, it might be, and I think you might agree with me here: the liberals are recapitulating their experience of their prior ideological battles on Blue Sky. Therefore they're replicating some of the things that the algorithm fed on, but they're doing it themselves, because that's how they think this is supposed to be done. And so they are actually interpolating the commodification of the internet back onto themselves without meaning to at all. This is not something they're setting out to do—liberals are not setting out to make themselves miserable, as much as it sometimes seems like they are, at least not on this.
Speaker 2:I think it's interesting, because I'm not sure it's that they're just doing it because that's how it's supposed to work; it's genuinely been interpellated as a part of their self-conception and identity.
Speaker 1:That it's like a role that they play on social media, of the mob, a constant stream of a mob of woke scolds. Yeah, I mean, because the only other place you encounter that is the professoriate in academia, in the humanities. You used to get it on Tumblr before that went to shit, right, and it would happen even early on.
Speaker 1:It would happen sometimes on LiveJournal, but it was not a predominant form of discourse on the early internet. And conversely, this leads to other weird things, like this assumption amongst liberals that Substack was inherently reactionary forever, and I was like, but that's just because you're reading reactionaries on it.
Speaker 1:Yeah. I use Substack, and, frankly, if you want a blog, it actually gets people to read your blogs, whereas if you put a blog up on WordPress right now, no one's ever going to see it. And part of that's because it gets around how shitty search has become.
Speaker 2:Thank you, Google. But nonetheless, the other part is that it just gives you a pop-up to put your email in.
Speaker 1:Yeah, and now you're going to get it. You don't have to rely on Google or Facebook or X to give it to you. Which is weird, because I told someone, what Substack figured out how to do was bring Yahoo email groups and blogs back and monetize them. That's all it did. I mean, it does more than that now; it does podcasts, it does almost everything Patreon does. But its initial technology was just an email newsletter feed, à la Yahoo Groups from like 2004.
Speaker 2:I'm not old enough to remember that Exactly.
Speaker 1:But Substack is a series of old technologies, monetized. That's it, that's all it is. It works like WordPress plus Yahoo email groups, and people are like, yes, this is brilliant and brand new, and I'm like, this is all old internet tech. But I do find that very interesting, because it also shows you something about path dependencies: when path dependencies get laid down like that, something can feel new even though it's just combining two very old technologies and putting a paywall on it.
Speaker 1:I do find that interesting when we come to things like thinking about innovation or the way language works or all this other stuff, because maybe you don't need to reinvent the wheel, and maybe people reinventing the wheel kind of sucks, and we could choose a different way if we trained ourselves to do it. And I need to read that essay closely, because I often think about that too. These technological limitations are in some ways a choice, but people confuse the choices we're making with this technology with the technology itself. Look, on a bad day, after I deal with a bunch of students in particular, I am Ned Ludd. I will admit that there are days, if you talk to me, and you've talked to me, Nicholas, where I'm like, we need to destroy all the thinking machines. I understand it, if we're living in this world.
Speaker 2:I understand it.
Speaker 1:But then I remind myself, no, I'm mad at the industry's use of this otherwise useful technology and how it's being picked up and having deleterious social effects. But that is not inherent to the technology; it's inherent to the society the technology exists within. And so then I remind myself, no, in a different society, LLMs would be fucking awesome.
Speaker 1:You know, you and I were talking about bullshit jobs and government clerical work and all this before, and how, sorry, Graeber, those things actually exist for a reason. Even if some of those reasons are stupid, they do actually exist for a reason. They aren't just made up, and it's not just a class conspiracy. But I will say there's a whole lot of that bullshit I can now just offload to an LLM and not worry about. Like, I can now prove compliance really easily.
Speaker 1:Now, it's not going to mean that we're going to have fewer administrators in schools, even though that's what it should mean, but again, that's a social problem. So I think a lot about that, and I guess that's a good thing to think about, particularly as we move into this time period where, as we talked about in your work on the crisis of the state, most politics right now is in disarray. I think we can all agree on that. Maybe we can't, but it feels like even the MAGA people are beginning to sour on what they're getting versus what they thought they were going to get.
Speaker 2:One thing I am hopeful about, and I realized this as I saw how badly Trump was shitting things up, and also what's happened to Steve Bannon, who has become a totally marginalized figure but could have been the Lenin of the right if they had let him, who started with his paper but, instead of having a party or organization of his own, just got subsumed into the Republican Party and got churned out when he was no longer useful. For the first time, the left is, I think, ahead of the right on certain things. Now, that is contingent; things could change totally if there's a real crackdown or what have you. But for the first time, the left is actually ahead a bit on creating that organization, because at the very least we can articulate the strategy of patience and things like that. That doesn't exist on the right. They are like 150 years behind us in that respect.
Speaker 1:Although, interestingly, to get here, they actually did that. That's the funny thing. They did actually have a strategy of patience; they didn't articulate it, they just had it.
Speaker 2:But not articulating it is part of the problem they're in now, right? Absolutely.
Speaker 1:Because their ideology precludes what they actually did. So this is part of the problem: when you assume all your subjects are rubes and you don't explain to them why you're doing things, eventually they don't know why you're doing things, and they've drunk the Kool-Aid, and all the people who were using it manipulatively, because they had a strategy of patience, are now dead. And so they're just like, why is everything going wrong? Well, because you all drank the Kool-Aid, motherfuckers. I don't know why you believe your own bullshit. And you're so afraid of the people who believe their own bullshit that you won't even try to intervene until it's really bad. The other advantage we do have is that, while I don't think the Democratic Party is dying anytime today or tomorrow, it is clear to a lot of people, even beyond the Bernie situation in 2016, that, well, we backed these guys and they did shit. And it has been hard for them to blame everybody else this time. They still try, but it has not stuck. Even on Bluesky, there's a lot more abject anger. Not at me, for once, although there are still people who think that somehow my marginal leftist ass completely destroyed Joe Biden and ruined everything. I don't know how they think that, but, you know, whatever. Most people in that world do get that it wasn't me, it was you. Now, are they self-reflective about it? Sometimes. But they at least get it, and I do think that puts us in a different kind of disarray, because I'm thinking about the advantage of not having a demagogic figure. Right now, Trump's personality can hold all this together, but he's literally 78.
Speaker 1:He's going to be gone within a decade, probably. Whereas we don't have a demagogic figure right now, so we actually do have the freedom of open ideological contestation, and the predominant form of non-managerial liberalism, aka woke scolding, is very unpopular, even amongst formal progressives, even if they don't admit it; they don't talk that way, and even on Bluesky there's less and less of it. So there is space for contestation that we have not had in the past. My only fear is that we get subsumed in the Great Resistance. That's always my fear. But when I push too hard on that, people think that we should tail the other side, and I'm always like, no, that's not what I told you to do. Being independent does not mean doing whatever the other side of the bipartisan equation we're rejecting is doing.
Speaker 2:But yeah, sort of inescapable.
Speaker 1:The logic. There's a problem with binary logics in a non-binary world. But yeah, it is sort of inescapable. Nico, where can people find your stuff? Where can they find your book? They can find your blog at your Substack, Prehistory of the Encounter, which I really enjoy and read on a regular basis, but where else can they find your book, your work?
Speaker 2:Well, I appear on Cosmonaut regularly. I have some new stuff coming out with them, including a book review of another person's work on AI. And you can find the book itself on Amazon; it's called A Soul of a New Type, and I made the cover myself, by the way. So that's where you can find me. And also, I really appreciate you shouting out the blog. I appreciate your blog too; I've read a few of the posts, which were quite well written, and I need to read more. It's Varn Blog, yeah, Varn Blog, which, when I saw it, I was like, I want to say Varn Barn.
Speaker 1:I thought about that, actually, for a second, but I was just like, you know what, now that I've taken this path dependency of giving everything my surname and sounding like a total egomaniac, I'm going to continue doing it, because for years I came up with cool, creative names for blogs and podcasts and no one could find them. I'm just like, okay, fuck it, it's my surname. It's a four-letter word, you can figure it out. Eight letters altogether. So, yeah, thank you for plugging that. I'm writing more. I've been contracted for a book, and then I'm writing another book, and I realized, as I was writing an article for Strange Matters on the problems of Bordiga, that I was 47 pages into the preamble.
Speaker 1:And I'm not really joking. I realized that I needed to practice short form again, because I couldn't say anything without trying to put the entire fucking history of every question that led up to it in the preamble before you even got to the main point, and I was like, okay, I've got to blog again. I haven't blogged in five years. I used to do book review blogs, and my interviews actually started on paper and then moved to the internet. So, yeah, you can find that. And thank you, and people, check out your Substack. Yours is free, I think, for the most part.
Speaker 2:You can give me money. Give me money, I'll take it. Yeah, but it's basically all free.
Speaker 1:And people should definitely check out your stuff on Cosmonaut. I like Cosmonaut. To give a plug for Cosmonaut: there's been a recent redesign, and I think it's way easier to read now. I thought the old one was pretty, but not necessarily reader-friendly.
Speaker 2:And I think it's easier to print out PDFs of it, which I appreciate. Way easier. Like the way they used to do titles.
Speaker 1:The title would eat up a page by itself, and it would not be legible.
Speaker 1:I had no idea. You could read it on a screen fine, but the moment you printed it out, the letters would all pile on top of each other, and I was like, okay. But yeah, it's easier to print, easier to download as a PDF, easier to read, and even a little bit easier to navigate. It's just modernized. It still has the cool USSR Soviet space program aesthetics; you didn't lose that. So, people, check your stuff out there, and I endorse it. I don't read a lot of left magazines. I've been reading Cosmonaut and Negation magazine out of Canada, which I like a good bit. I used to like Viewpoint, but I don't think it exists anymore. Its website is still up, but they don't seem to have put a new article up since 2022.
Speaker 2:Wow.
Speaker 1:And I like New International and Strange Matters, but I kind of work with them, so I'm biased on those, and I like some articles in Jacobin, I guess. But Cosmonaut I would tell people to check out. And I want to have you come back for us to talk about a couple of things. One, we still need to do our discussion of the Critique of the Gotha Program and what you think it is.
Speaker 1:But two, I wanted to talk to you about neo-reaction and the youth. Sure. Because you are one of the few other people besides me who got obsessed. You read reactionary texts, I read reactionary texts. Most leftists do not read reactionary texts, because they have a miasma theory of ideology and are afraid that if you read it you'll be corrupted, which I'm always amused by. You think they're that powerful? Have you actually read this? They occasionally make good points, but most of it you can see right through. But I do want to talk to you about neo-reaction, because I do think neo-reaction, if you were paying attention, is the ideology, out of all the cluster of things we called the alt-right, that was going to survive, because it flatters tech oligarchs, it is not hostile to religion, and it is not inherently...
Speaker 2:Well, it depends on what flavor of neo-reactionary we're talking about. There are the Nietzschean vitalists, who really hate Christianity and who, in my opinion, are the most evil of this whole bunch.
Speaker 1:Well, I mean, this is a weird thing. This was also true for Traditionalism: there was a Christian or quasi-Christian strain of it and there was a Nietzschean strain of it, and somehow they were allies even though they hated each other's ideology. I think that's happened more than once; it's happened more than three or four times in history. It's like, why are the Nietzscheans hanging out with the Christians when the Nietzscheans think the Christians ruined everything? But you know, that's a bizarro-world popular front. But I do think neo-reaction is important to talk about, because I think people focused, particularly from 2016 to 2020, on the wrong far-right ideologies. They were looking at racial nationalism, the surviving forms of Traditionalism, National Bolshevism, and stuff like that, and they were ignoring, particularly after Fisher died, or only reading the weirdest neo-reactionary people like Nick Land.
Speaker 2:Yeah, I understand why that happened, because in many ways the neo-reactionaries really were a minority. The really popular stuff, which I was more plugged into at the time, was the more alt-right type of stuff. And then Trump won, which kind of took the wind out of that sail, and now the same thing is kind of happening to neo-reaction right now, which I talked a little bit about on the Cosmonaut podcast, but I'd love to elaborate on it more.
Speaker 1:Yeah, I'd love to have you back to talk about that, because I think it's a big deal. Let's preview it: what do you think, very quickly, is taking the wind out of its sails?
Speaker 2:I mean the fact that they played a role in what's happening right now, in that they created and spread this ideology of obedience to this authority, this big leader or whatever, and this became at least somewhat popular among Republican staffer types, and it's one reason why there's this total deference to Trump now and no taking of responsibility for your own things, like bureaucracy and policy and that sort of thing. Now, part of that is also just the normal Republican tendency toward unitary executive theory or whatever, but they played a role in this too, a real, concrete role, and people are realizing how stupid they were for buying into that shit.
Speaker 1:Well, you know, Scott Alexander was recently one of them. Oh man, I remember when Scott Alexander seemed like a normal rationalist liberal who would only make fun. My one run-in with Scott was him using me, and actively misreading a blog post I wrote about feminism, to say that I hated the nerds. What year was this? This was like 2013.
Speaker 2:That's a very 2013 thing to happen, yeah, right.
Speaker 1:Which was weird, because he also caused my blog to blow up by critiquing it. I was like, how the fuck did this guy find me? And at the time, I had actually only ever said nice things about Scott Alexander, because I liked his motte-and-bailey description as a way a lot of progressive political rhetoric works; I thought it was accurate. But things have not gone so well over in rationalist world. It was like when I went on LessWrong and was like, when did you guys move from being liberal nerds to neo-reactionaries? Because I followed that in the aughts; I think I first started reading them around 2009.
Speaker 1:So, back in my "I hated Sam Harris before you did" period of life. I've hated Sam Harris since 2003. It's like, oh, you've complained about Matt Yglesias? I've complained about Matt Yglesias since I was 23 years old, and I am 44. But anyway, thank you so much. We'll have you back on. I mean, I think you might be the person who's come on here the most; it's either you or Elijah and Mary. But you'll be back for sure, and I really enjoy it. But people should buy this book. I endorsed it off of reading the first half. I read most of the second half last night, so hopefully there's nothing in the last fourth that makes me go, oh my god, no.
Speaker 2:I can't guarantee it. There are some curveballs that I throw.
Speaker 1:Nico really is trying to build Roko's Basilisk. And we'll end there. Thank you so much.