  • I will be the controversial one and say that I reject that "consciousness" even exists in the philosophical sense. Of course, things like intelligence, self-awareness, problem-solving capabilities, and even emotions exist, but it is possible to describe all of these things in purely functional terms, which would in turn be computable. When people talk about "consciousness not being computable," they are talking specifically about the Chalmerite definition of "consciousness" popular in philosophical circles.

    This is really just a rehashing of Kant's noumena-phenomena distinction, but with different language. The rehashing goes back to the famous "What is it like to be a bat?" paper by Thomas Nagel. Nagel argues that physical reality must be independent of point of view (non-contextual, non-relative, absolute), whereas what we perceive clearly depends upon point of view (contextual). You and I are not seeing the same thing, for example: even if we look at the same object, we will see different things from our different standpoints.

    Nagel thus concludes that what we perceive cannot be reality as it really is, but must be some sort of fabrication by the mammalian brain. It is not equivalent to reality as it really is (which is said to be non-contextual) but must be something reducible to the subject. What we perceive, therefore, he calls "subjective," and since observation, perception, and experience are all synonyms, he calls this "subjective experience."

    Chalmers, in his later paper "Facing Up to the Problem of Consciousness," renames this "subjective experience" to "consciousness." He points out that if everything we perceive is "subjective" and created by the brain, then true reality must be independent of perception, i.e. no perception could ever reveal it; we can never observe it and it always lies beyond all possible observation. How does this entirely invisible reality, which is completely disconnected from everything we experience, in certain arbitrary configurations "give rise to" what we experience? This "explanatory gap" is what he calls the "hard problem of consciousness."

    This is just a direct rehashing, in different words, of Kant's phenomena-noumena distinction, where the "phenomena" is the "appearance of" reality as it exists from different points of view, and the "noumena" is that which exists beyond all possible appearances, the "thing-in-itself" which, as the term implies, has absolute (non-contextual) properties, since it can be meaningfully considered in complete isolation. Velocity, for example, is contextual, so objects don't meaningfully have velocity in complete isolation; to say objects meaningfully exist in complete isolation is thus to claim that they have a non-contextual ontology. This leads to the same kind of "explanatory gap" between the two that was previously called the "mind-body problem."

    The reason I reject Kantianism and its rehashing by the Chalmerites is because Nagel's premise is entirely wrong. Physical reality is not non-contextual. There is no "thing-in-itself." Physical reality is deeply contextual. The imagined non-contextual "godlike" perspective whereby everything can be conceived of as things-in-themselves in complete isolation is a fairy tale. In physical reality, the ontology of a thing can only be assigned to discrete events whereby its properties are always associated with a particular context, and, as shown in the famous Wigner's friend thought experiment, the ontology of a system can change depending upon one's point of view.

    This non-contextual physical reality from Nagel is just a fairy tale, and so the rest of his paper's argument, that what we observe (synonyms: experience, perceive) is "subjective," does not follow. If Nagel fails to establish "subjective experience," then Chalmers fails to establish "consciousness," which is just a renaming of that term, and thus Chalmers fails to demonstrate an "explanatory gap" between consciousness and reality, because he has failed to establish that "consciousness" is a thing at all.

    What's worse is that if you buy Chalmers' and Nagel's bad arguments, then you basically end up equating observation as a whole with "consciousness," and thus you run into the Penrose conclusion that it's "non-computable." Of course we cannot compute what we observe, because what we observe is not consciousness, it is just reality. And reality itself is not computable. The way in which reality evolves through time is computable, but reality as a whole just is. It's not even a meaningful statement to speak of "computing" it, as if existence itself were subject to computation, but the Chalmerite delusion tricks people like Penrose into thinking this reveals something profound about the human mind, when it's not relevant to the human mind at all.

  • That's more religion than pseudoscience. Pseudoscience tries to pretend to be science and tricks a lot of people into thinking it is legitimate science, whereas religion just makes proclamations and insists that any evidence that debunks them must be wrong. Pseudoscience is a lot more sneaky, and has become more prevalent in academia itself ever since people were infected by the disease of Popperism.

    Popperites believe something is "science" as long as it can in principle be falsified, so if you invent a theory that could in principle be tested, then you have proposed a scientific theory. So pseudoscientists come up with the most ridiculous nonsense ever, based on literally nothing, and then insist everyone must take it seriously because it could in theory be tested one day, yet it is always just out of reach of actually being tested.

    Since it is testable, and the brain disease of Popperism that has permeated academia leads people to be tricked by this sophistry, sometimes these pseudoscientists can even secure funding to test it, especially if they can get a big name in physics to endorse it. If it's being tested at some institution somewhere, if there are at least a couple of papers published by someone looking into it, it must be genuine science, right?

    Meanwhile, while they create this air of legitimacy, a smokescreen around their ideas, they reach out to a lay audience by publishing books, doing documentaries on television, or publishing videos to YouTube, talking about woo nuttery like how we're all trapped inside a giant "cosmic consciousness" and we all feel each other's vibrations through quantum entanglement, and how somehow science proves the existence of gods.

    As they make immense dough off of the lay audience they grift from, if anyone points out that their claims are based on nothing, they can just deflect to the smokescreen they created through academia.

  • Because you use a prompt in natural language to produce some stuff for you...? In this case, a translation. There are already entire companies that sell whole books translated using AI, and there are a lot of them on Amazon. If "generative AI" were to refer to anything at all, it seems strange that you want it to exclude entire books generated by AI.

    If you want to be strict about natural language meaning complete, grammatically correct sentences like the ones we're using here, then translation software is generative AI, but some AI image generators like Stable Diffusion are not, since they rely on you supplying a list of positive and negative tags rather than sentences you would speak. It would also mean that if I build an AI that sends commands to a robot based on voice commands, that would qualify as generative AI as well, since it is producing the command output for me based on speech.

  • Generative AI is colloquially used to refer to AI which you prompt in natural language to produce some stuff for you. If you prompt some AI to make music or protein sequences for you then that is generative AI too. It is a loose term and not something that AI scholars agree upon but it is not meaningless.

    Again, you only proved my point as you gave me a definition that applies to things like OCR, translation software, and voice recognition, which people wouldn't colloquially categorize as generative AI. You cannot provide a definition that gives the kind of carve-out you want because it doesn't exist, and any attempt to do so only solidifies my point further. The carve-out is ultimately arbitrary, it is just an arbitrary list of AI people don't like.

  • That didn't address the point I was making: all AI is ultimately about generating outputs, so I am not sure where your line of "generative AI" actually begins and ends. The term is absolutely 100% meaningless if I have zero idea what even qualifies as "generative AI" and what doesn't, because then you aren't telling me anything; I don't know what you're saying you like and what you don't like, and different people would probably have different ideas about what even counts as "generative AI." I am saying the term is too ambiguous for me to even know what is being talked about, and your response is "well it's just a lot of things and dontcha know in the English language we use terms for a lot of things all the time." Like... what??? How is that a response? An appropriate response to what I said would be to actually tell me something more concrete I could use to judge whether or not something counts as "generative AI."

    From my standpoint it really seems like "generative AI" is just a stand-in for "AI I don't like." People use it and arbitrarily lump in things they consider "slop factories" like image generators or ChatGPT, but when you point out plenty of other AI actually do have very practical usages in the science, some even also being LLMs or based on diffusion technologies, they will say "erm well I just dislike generative AI" even though again the technology is fundamentally the same and they are both generating content. The caveat is not really any more meaningful than just a placeholder for AI people think is bad.

  • Why are LLMs and image generation generative AI but music generation isn't, or speech generation, or protein sequence generation, or material design generation, etc.? Again, it's very arbitrary. Just say you don't like LLMs and image generation. "Generative AI" doesn't have a concrete meaning. All ANN technology is universally used to generate some output.

  • A lot of computer algorithms are inspired by nature. Sometimes when we can't figure out a problem, we look at how nature solves it, and that inspires new algorithms to solve those problems. One problem computer scientists struggled with for a long time is tasks that are very simple for humans but very complex for computers, such as simply converting spoken words into written text. Everyone's voice is different, and even the same person may speak in different tones, with different background audio, different microphone quality, etc. There are so many variables that writing a giant program to account for them all with a bunch of IF/ELSE statements in computer code is just impossible.

    Computer scientists recognized that computers are very rigid logical machines that process instructions serially, like stepping through a logical proof, but brains are very decentralized and massively parallelized computers that process everything simultaneously through a network of neurons, whereby their "programming" is determined by the strengths of the neural connections between the neurons. These connections are analogue rather than digital, only produce approximate solutions, and aren't as rigorous as a traditional computer.

    This led to the birth of the artificial neural network. This is a mathematical construct that describes a system with neurons and configurable strengths of all its neural connections, and from that mathematicians and computer scientists figured out ways that such a neural network could also be "trained," i.e. to configure its neural pathways automatically to be able to "learn" new things. Since it is mathematical, it is hardware-independent. You could build dedicated hardware to implement it, a silicon brain if you will, but you could also simulate it on a traditional computer in software.
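
    As a purely illustrative sketch (a toy example, not any particular product's code), here is that idea in a few lines: a "neural network" is just layers of neurons joined by numeric connection strengths, and "training" repeatedly nudges those numbers to reduce the error on example data.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical toy task: learn XOR from four input/output examples.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Connection strengths ("weights") between the layers, randomly initialized.
    W1 = rng.normal(size=(2, 8))
    W2 = rng.normal(size=(8, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(5000):
        # Forward pass: propagate the inputs through the connections.
        h = sigmoid(X @ W1)      # hidden layer activations
        out = sigmoid(h @ W2)    # the network's current guesses

        # Backward pass: nudge each weight in proportion to its share of the error.
        err_out = (out - y) * out * (1 - out)
        err_h = (err_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ err_out
        W1 -= 0.5 * X.T @ err_h

    print(np.round(out, 2))  # after training, typically close to [0, 1, 1, 0]

    The same loop works whether the input/output pairs are audio and transcripts, images and tags, or text and text; only the data and the size of the network change.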

    Computer scientists quickly found that, by applying this construct to problems like speech recognition, they could supply the neural network with tons of audio samples and their transcribed text, and the neural network would automatically find patterns in them and generalize from them, so that when brand new audio is recorded it can transcribe it on its own. Suddenly, problems that at first seemed unsolvable became very solvable, and the approach started to be implemented in many places; language translation software, for example, is also based on artificial neural networks.

    Recently, people have figured out that this same technology can be used to produce digital images. You feed a neural network a huge dataset of images and associated tags that describe them, and it will learn to generalize patterns that associate the images with the tags. Depending upon how you train it, this can go both ways. There are img2txt models, called vision models, that can look at an image and tell you in written text what the image contains. There are also txt2img models, which you can feed a description of an image and they will generate an image based upon it.

    All the technology is ultimately the same between text-to-speech, voice recognition, translation software, vision models, image generators, LLMs (which are txt2txt), etc. They are all fundamentally doing the same thing, just taking a neural network with a large dataset of inputs and outputs and training the neural network so it generalizes patterns from it and thus can produce appropriate responses from brand new data.

    A common misconception about AI is that it has access to a giant database and the outputs it produces are just stitched together from that database, kind of like a collage. However, that's not the case. The neural network is always trained with far more data than could possibly fit inside the neural network, so it is impossible for it to remember its entire training data (if it could, this would lead to a phenomenon known as overfitting, which would render it nonfunctional). What actually ends up "distilled" in the neural network is just a big file called the "weights" file, which is a list of all the neural connections and their associated strengths.

    When the AI model is shipped, it is not shipped with the original dataset and it is impossible for it to reproduce the whole original dataset. All it can reproduce is what it "learned" during the training process.
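
    A small illustrative sketch of that point (continuing the hypothetical toy network from above, not any real model's file format): the entire shipped "model" is just the weight arrays written to disk, and the training examples are not in that file.

    import numpy as np

    # Stand-ins for weights produced by the training loop sketched earlier.
    W1 = np.random.normal(size=(2, 8))
    W2 = np.random.normal(size=(8, 1))

    # "Shipping" the model: the whole artifact is just these numbers.
    np.savez("weights.npz", W1=W1, W2=W2)

    loaded = np.load("weights.npz")
    print(loaded["W1"].shape, loaded["W2"].shape)  # (2, 8) (8, 1)
    # The original input/output examples are not stored here and cannot be
    # read back out of this file; only the learned connection strengths are.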

    When the AI produces something, the information first enters an "input" layer of neurons, kind of like sensory neurons; that input may be the text prompt, an image, or something else. It then propagates through the network, and when it reaches the end, the final set of neurons is the "output" layer, which is kind of like motor neurons in that each is associated with some action, like plotting a pixel with a particular color value or writing a specific character.

    There is a feature called "temperature" that injects random noise into this "thinking" process, so that if you run the algorithm many times with the same prompt you will get different results, because its "thinking" is made nondeterministic.
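
    A minimal sketch of how "temperature" is typically implemented (assuming a model that outputs a raw score, or logit, for each possible next token; the numbers here are made up):

    import numpy as np

    rng = np.random.default_rng()

    def sample(logits, temperature=1.0):
        # Higher temperature flattens the distribution (more randomness);
        # temperature close to zero makes the pick nearly deterministic.
        scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate tokens
    print(sample(logits, temperature=0.2))  # almost always picks index 0
    print(sample(logits, temperature=2.0))  # frequently picks other indices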

    Would we call this process of learning "theft"? I think it's weird to say it is "theft," personally; it is directly inspired by how biological systems learn, of course with some differences to make it more suited to run on a computer, but the very broad principle of neural computation is the same. I can look at a bunch of examples on the internet and learn to do something, such as look at a bunch of photos to use as reference to learn to draw. Am I "stealing" those photos when I then draw an original picture of my own? People who claim AI is "stealing" either don't understand how the technology works, or just reach to the moon claiming things like it doesn't have a soul or whatever so it doesn't count, or just point to differences between AI and humans, which do exist but aren't relevant differences.

    Of course, this only applies to companies that scrape data that really is posted publicly for everyone to freely look at, like on Twitter or something. Some companies have been caught illegally scraping data that was never put anywhere publicly, like Meta, which got in trouble for scraping libgen, where a lot of the material is supposed to be behind a paywall. However, the law already protects people whose paywalled data gets illegally scraped, as Meta is being sued over this, so it's already on the side of the content creator here.

    Even then, I still wouldn't consider it "theft." Theft is when you take something from someone and thereby deprive them of using it. In that case it would be piracy, which is when you copy someone's intellectual property for your own use without their permission but ultimately don't deprive the original person of the use of it. At best you can say that in some cases AI art, and AI technology in general, can be based on piracy. But this is definitely not a universal statement. And personally I don't even like IP laws, so I'm not exactly the most anti-piracy person out there lol

  • The distinction is largely meaningless. Why is txt2img considered "generative" whereas img2txt (OCR) is not? Sometimes it is treated as generative when it comes in the form of a VLM. Why is aud2txt (transcription) not typically considered generative? Is txt2aud (TTS) generative? What about translation software? It's all fundamentally the same technology, and there is no rigorous definition of "generative AI." And if we want to talk about something specific like LLMs, these also have scientific applications; for example, there are various LLMs trained on generating protein sequences or predicting the behavior of protein sequences, like ProGen, ProLLaMA, and ProteinGPT. The whole "I'm pro AI, just anti generative AI" thing is a meaningless sentiment.

  • Color is not invented by the brain but is socially constructed. You cannot look inside someone's brain and find a blob of green, unless idk you let the brain mold for a while. All you can do is ask the person to think of "green" and then correlate whatever brain patterns respond to that request, but everyone's brain patterns are different, so the only thing that ties them all together is that we've all agreed as a society to associate a certain property in reality with "green."

    If you were an alien who had no concept of green and had abducted a single person, if that person is thinking of "green," you would have no way to know because you have no concept of "green," you would just see arbitrary patterns in their brain that to you would seem meaningless. Without the ability to reference that back to the social system, you cannot identify anything "green" going on in their brain, or for any colors at all, or, in fact, for any concepts in general.

    This was the point of Wittgenstein's rule-following problem, that ultimately it is impossible to tie any symbol (such as "green") back to a concrete meaning without referencing a social system. If you were on a deserted island and forgot what "green" meant and started to use it differently, there would be no one to correct you, so that new usage might as well be what "green" meant.

    If you try to not change your usage by building up a basket of green items to remind you of what "green" is, there is no basket you could possibly construct that would have no ambiguity. If you put a green apple and a green lettuce in there, and you forget what "green" is so you look at the basket for reference, you might think, for example, that "green" just refers to healthy vegetation. No matter how many items you add to the basket, there will always be some ambiguity, some possible definition that is compatible with all your examples yet not your original intention.

    Without a social system to reference for meaning and to correct your mistakes, there is no way to be sure that today you are even using symbols the same way you used them yesterday. Indeed, there would be no reason for someone born and raised in complete isolation to even develop any symbols at all, because they would all just be fuzzy and meaningless. They would still have a brain and intelligence and be able to interpret the world, but they would not divide it up into rigid categories like "green" or "red" or "dogs" or "cats." They would think in a way where everything kind of merges together, a mode of thought that is very alien to social creatures, so we cannot actually imagine what it is like.

  • does the trump base even care about the economy? they seem to just care about the wokes in their video games or whatever. i even saw a fox news segment saying that the economic crash making you poor or lose your retirement is patriotic because it's like giving up your wealth for the war effort during WW2.

  • So a couple of intergalactic hydrogen atoms could exchange a photon across light years and become entangled for the rest of time, casually sharing some quantum of secrets as they coast to infinity.

    No "secrets" are being exchanged between these particles.

    https://en.wikipedia.org/wiki/No-communication_theorem
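
    For reference, the theorem in one line (a standard textbook form, not a quote from the linked article): nothing Alice does locally to her half of an entangled pair changes the statistics Bob can observe on his half,

    \rho_B' = \operatorname{Tr}_A\!\Big[\sum_i (A_i \otimes I_B)\,\rho_{AB}\,(A_i^\dagger \otimes I_B)\Big] = \operatorname{Tr}_A[\rho_{AB}] = \rho_B ,

    where the A_i describe any local operation or measurement on Alice's side (with the usual normalization \sum_i A_i^\dagger A_i = I). Since Bob's reduced state is unchanged, no message, and no "secret," can be sent this way.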

  • The point wasn't that the discussion is stupid, but that believing particles can be in two states at once is stupid. Schrodinger was making a kind of argument known as a reductio ad absurdum in his paper The Present Situation in Quantum Mechanics. He was saying that if you believe a single particle can be in two states at once, it could trivially cause a chain reaction that would put a macroscopic object in two states at once, and it's absurd to think a cat can be in two states at once, ergo a particle cannot be in two states at once.

    In his later work Science and Humanism, Schrodinger argues that all the confusion around quantum mechanics originates from assuming that particles are autonomous objects with their own individual existence. If this were the case, then the particle must have properties localizable to itself, such as its position. And if the particle's position is localized to itself and merely a function of itself, then it would have a position at all times. That means if the particle is detected by a detector at t=0 and a detector at t=1 and no detection is made at t=0.5, the particle should still have some position value at t=0.5.

    If the particle has properties like position at all times, then the changes in its position must always be continuous as there would be no gaps between t=0 and t=1 where it lacks a position but would have a position at t=0.1, t=0.2, etc. Schrodinger referred to this as the "history" of the particle, saying that whenever a particle shows up on a detector, we always assume it must have come from somewhere, that it used to be somewhere else before arriving at the detector.

    However, Schrodinger viewed this as a mistake that isn't actually backed by the empirical evidence. We can only make observations at discrete moments in time, and to assume the particle is doing something in between those observations is to make assumptions about something we cannot, by definition, observe, and so it can never actually be empirically verified.

    Indeed, Schrodinger's concern was not merely that it could not be verified, but that all the confusion around quantum theory comes precisely from what he called trying to "fill in the gaps" of the particle's history. When you do so, you run into logical contradictions unless you introduce absurdities, like nonlocal action, retrocausality, or, as is popular these days, multiverses. Schrodinger also pointed out that the measurement problem, too, directly stems from trying to fill in the gaps of the particle's history.

    Schrodinger thought it made more sense to just abandon the notion that particles are really autonomous objects with their own individual existence. They only exist at the moment they are interacting with something, and the physical world evolves through a sequence of discrete events and not through continuous transitions of autonomous entities.

    He actually used to hate this idea and criticized Heisenberg for it, as it was basically Heisenberg's view as well, saying "I cannot believe that the electron hops about like a flea." However, in the same book he mentions that he changed his mind precisely because of the measurement problem. He says that he introduced the Schrodinger equation as a way to "fill in the gaps" between these "hops," but that it actually fails to achieve this because it just shifts the gap from between "hops" to between measurements, as the system would evolve continuously up until measurement and then undergo a sudden transition to a discrete value.

    Schrodinger didn't think it made sense that measurement should be special or play any sort of role in the theory over any other kind of physical interaction. If you don't try to fill in the gaps at all, then no physical interaction is treated as special, all are put on an equal playing field, and so you don't have a problem of measurement.

    What a lot of people aren't taught is that when quantum mechanics was originally formulated, it had no Schrodinger wave equation and it had no wave function, yet it was perfectly capable of making all the same predictions that modern quantum mechanics could make. The original formulation of quantum mechanics by Heisenberg is known as matrix mechanics and it does not have the wave function, it instead really does treat it as if particles just hop from one physical interaction to the next. Heisenberg believed this process was fundamentally random and so at best you could ever hope to make a probabilistic prediction, so he treated the state vector as something epistemic, i.e. the particle doesn't literally spread out like a wave, it just hops from one interaction to the next and you make your best guess using probability rules.

    Again, matrix mechanics can make all the same predictions as standard quantum mechanics, and so the wave function formulation is really just a quirk of a very specific way to mathematically formulate the theory, so assigning it such strong ontological validity is rather dubious as it is not indispensable. Superposition is just a mathematical notation representing the likelihoods of different results when a future interaction occurs, such as with your measuring device. It doesn't represent the ontological status of the system in that very moment, because the system does not even have its own ontological status. As Schrodinger put it, particles on their own have no "individuality." Physical systems only have ontological reality when they are participating in a physical interaction.
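
    To make the "notation for likelihoods" reading concrete, the standard Born rule (textbook material, not a quote from Schrodinger) is just:

    |\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad P(0) = |\alpha|^2, \quad P(1) = |\beta|^2, \quad |\alpha|^2 + |\beta|^2 = 1 .

    On the reading described above, the amplitudes \alpha and \beta are bookkeeping for the probabilities of the two possible outcomes of the next interaction, not a literal description of the system "being in two states at once" in between interactions.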

  • I just sometimes use bigger platforms because, well, the point of social media is to socialize. It's not as fun if there's not many people there.

  • Personally, I would say state capitalism, in the NEP/Lenin usage, and socialism, are different.

    The proletariat seizes power by expropriating the largest enterprises, which already have socialized production, to use as the basis of socialist society. However, it logically follows that in order for this to lead to the proletariat having a dominant position in the economy, for public enterprises that operate in the interests of all people to be the principal aspect of society, those large enterprises must have already dominated society prior to their expropriation.

    If you nationalize the biggest enterprises in a country where there really are no big enterprises, and so industrial big bourgeois capital does not actually dominate society, then you will not end up in a dominant economic position after nationalizing them. You would be nationalizing what is ultimately a secondary, subordinate set of enterprises which play a limited role in the economy as a whole.

    When Lenin talked about the NEP being capitalist, he said that Russia at the time was overwhelmingly dominated by "petty-bourgeois production." That means even if he nationalized the biggest enterprises, the dominant aspect of the economy would still be the small enterprises and not the big enterprises, and even of those "big" enterprises, he said many were not even operational at the time due to the war.

    The socialist market economy exists in a country where big enterprise does dominate society, so there actually is a material foundation for building a socialist society, but small enterprise still exists to a significant degree, just in a secondary, subordinate position.

  • Because socialism is based on big industry and big industry did not even predominate yet in 1921 Russia. There could hardly even be said to be "commanding heights of the economy" because even the biggest enterprises played a minor role in the economy. It was a largely peasant country overwhelmingly dominated by small commodity producers and petty-bourgeois enterprises.

    The question arises: What elements predominate? Clearly, in a small-peasant country, the petty-bourgeois element predominates and it must predominate, for the great majority—those working the land—are small commodity producers.

    It took time for the size of the proletariat to grow and the size of public enterprise to grow enough so that the public sector could actually be meaningfully said to be the mainstay of the economy. Even the little bit of big industry they had, some of it was stalled due to the war.

  • Those are literally China's policies. The problem is most westerners are lied to about China's model; it is just painted as if Deng Xiaoping was an uber capitalist lover who turned China into a free market economy, and that was the end of history.

    The reality is that Deng Xiaoping was a classical Marxist, so he wanted China to follow the development path of classical Marxism (grasping the large, letting go of the small) and not the revision of Marxism by Stalin (nationalizing everything), because Marxian theory is about formulating a scientific theory of socioeconomic development, and so if they wanted to develop as rapidly as possible they needed to adhere more closely to Marxian economics.

    Deng also knew the people would revolt if the country remained poor for very long, so they should hyper-focus on economic development first and foremost, at all costs, for a short period of time. He had the foresight to predict that such a hyper-focus on development would lead to a lot of problems: environmental degradation, rising wealth inequality, etc. So he argued that this should be a two-step development model: an initial stage of rapid development, followed by a second stage of shifting to a model with more of a focus on high quality development, to tackle the problems of the previous stage once they're a lot wealthier.

    The first stage went from Deng Xiaoping to Jiang Zemin, and then they announced they were entering the second phase under Hu Jintao, which has carried on into the Xi Jinping administration. Western media decried Xi as an "abandonment of Deng," because western media is just pure propaganda, when in reality this was Deng's vision. China has switched to a model that no longer prioritizes rapid growth but prioritizes high quality growth.

    One of the policies for this period has been to tackle the wealth inequality that arose during the first period. They have done this through various methods, but one major one is huge poverty alleviation initiatives which the wealthy have been required to fund. Tencent, for example, "donated" an amount worth three-quarters of its yearly profits to government poverty alleviation initiatives. China does tax the rich, but they have a system of unofficial "taxation" as well, where they discreetly take over a company through a combination of party cells and becoming a major shareholder via the golden share system, and then make that company "donate" its profits back to the state. As a result, China's wealth inequality has been gradually falling since 2010, and they've become the #1 funder of green energy initiatives in the entire world.

    The reason you don't see this in western countries is because they are capitalist. Most westerners have a mindset that laws work like magic spells: you can just write down on a piece of paper whatever economic system you want, and this is like casting a spell to create that system as if by magic, so if you just craft the language perfectly to get the perfect spell, then you will create the perfect system.

    The Chinese understand this is not how reality works. Economic systems are real physical machines that continually transform nature into goods and services for human consumption, and so whatever laws you write can only meaningfully be implemented in reality if there is a physical basis for them.

    The physical basis for political power ultimately rests in production relations, that is to say, ownership and control over the means of production, and thus the ability to appropriate all wealth. The wealth appropriation in countries like the USA is entirely in the hands of the capitalist class, and so they use that immense wealth, and thus political power, to capture the state and subvert it to their own interests, and thus corrupt the state to favor those very same capital interests rather than to control them.

    The Chinese understand that if you want the state to remain an independent force that is not captured by the wealth appropriators, then the state must have its own material foundations. That is to say, the state must directly control its own means of production, it must have its own basis in economic production as well, so it can act as an independent economic force and not wholly dependent upon the capitalists for its material existence.

    Furthermore, its economic basis must be far larger, and thus more economically powerful, than any other capitalist. Even if it owns some basis, if that basis is too small it would still become subverted by capitalist oligarchs. The Chinese state directly owns and controls the majority of its largest enterprises, and of the minority of large enterprises it doesn't directly control, it has indirect control over most. This makes the state itself by far the largest producer of wealth in the whole country, producing 40% of the entire GDP; no other single enterprise in China even comes close to that.

    This enormous control over production allows the state to control non-state actors and not the other way around. In a capitalist country the non-state actors, these being the wealthy bourgeois class who own the large enterprises, instead capture the state and control it for their own interests, and so the state does not genuinely act as an independent body with its own independent interests, but only as the accumulation of the average interests of the average capitalist.

    No law you write that is unfriendly to capitalists under such a system will be sustainable, and often it is entirely unenforceable, because in capitalist societies there is no material basis for it. The US is a great example of this. It's technically illegal to do insider trading, but everyone in US Congress openly does insider trading, openly talks about it, and the record of them getting rich from insider trading is pretty much public knowledge. But nobody ever gets arrested for it, because the law is not enforceable: the material basis of US society is production relations that give control of the commanding heights of the economy to the capitalist class, and so the capitalists just buy off the state for their own interests, and there is no meaningful competing power dynamic against that in US society.

  • China does tax the rich, but they also have an additional system of "voluntary donations." For example, Tencent "volunteered" to give up an amount worth about three-quarters of its yearly profits to social programs.

    I say "voluntary" because it's obviously not very voluntary. China's government has a party cell inside of Tencent as well as a "golden share" that allows it to act as a major shareholder. It basically has control over the company. These "donations" also go directly to government programs like poverty alleviation and not to a private charity group.

  • You see the same with US models like Copilot: if you ask about things like the election process, Copilot will just tell you it's outside of its scope and to please look elsewhere for more current information.

    Me: How does voting in the USA work?

    Copilot: I know elections are important to talk about, and I wish we could, but there's a lot of nuanced information that I'm not equipped to handle right now. It's best that I step aside on this one and suggest that you visit a trusted source. How about another topic instead?

    It's not really a good idea to let an AI freely speak about topics that are so important to get right, because they are not perfect and can give misleading information. Although, DeepSeek is open source, so there is nothing stopping you from downloading it to your PC and running it there. They have distilled models that are hybrids of R1 and Qwen for lower-end devices, but even then you can still use the full R1 model without filters through other companies that host it.

  • I could afford a wife and kids and am not opposed to it, but I am just too unlikable for that to even matter. Lol