ARTOFFICIAL INTELLIGENCE?

Okay, it is MASTERPEACE MONDAY. The time is 6:24 p.m., Monday, November 27, 2023. Today's topic of the day, the topic I want to talk about today, is AI and the many, many, many ramifications that come with it. Where to even begin on my thoughts about this?

So, I guess I would describe it this way to people who aren't my peers, who aren't as caught up on the latest AI news. Think of everything you used to see in sci-fi movies, sci-fi shows, dystopian video games, and such: they talk about the beginnings of AI, how we started to use it and marveled at how amazing we were for creating it, and then they build to this overarching idea that AI is going to take over. Not saying that it will in the future, but we are in the beginning stages of that, let's just say it.

The most well-known one, the one that people have talked about the most, is probably the Apple of the AI race right now: OpenAI and ChatGPT. From my understanding, they really became popular around the second iteration of it, and they also had a text-to-image generator where you basically describe what you want to make, and it generates an image from that. There are some other ones that came about, like Midjourney and such, but OpenAI's ChatGPT is the one that a lot of people have at least heard about.

My thoughts on it, right? As someone who is a lover of technology and has an engineering mind: I was schooled from sixth grade all the way until I graduated high school in an engineering-focused curriculum. Engineering was a really big part of what we learned from sixth grade on. So, the principles of engineering: What does it mean to be an engineer? What does it mean to break down a problem? And the various subsets of that, like computer science and computer engineering.

However, I am also a very philosophical person because, at that same school, philosophy was a very big part of our education. We took philosophy classes from sixth grade all the way into 12th, right? That was part of our curriculum. And so, like my school had a philosophy journal that we produced, and by the time I graduated, it was winning awards and stuff.

So my approach to everything that's been coming out has been that of an observer, right? I believe that AI has great potential to change the world for the better. It is a very powerful tool if you break down what the definition of intelligence is and what that means in relation to humans, what that means in relation to animals that we think are less intelligent than us. Like, how do you measure intelligence, right? And then we have artificial intelligence, intelligence that we created, but it surpasses us, right?

The way that I see it is from a very fundamental coding perspective, where it's like, what do you need to define and command the program to do in order for it to get to some level of intelligence, right? Where it's not just following a program, where it's actually processing information and then making an informed decision from that. So I look at it from a coding perspective, but I also look at it as, what can this do for humanity?

We see that when we discover a new technology, the first thing we try to do is weaponize it. Like, how can we exploit this, right? But from a basic evolutionary standpoint too, based off of what they say, think about our ancestors, before they became Homo sapiens, when they first took a bone or a branch to kill an enemy. We see that in '2001: A Space Odyssey,' where the creatures were beefing with each other, and one killed the other using a tool. So what they're saying is that our first step of evolution, the thing that started to distinguish us from one another in the power struggle, came through using something to harm another.

So is the only way we're able to discover something new and come together when we want to harm something? We talk about artificial intelligence, but right now it's really just text-based stuff, basic information that it can process and then give you. It can process a whole lot, but it's not autonomous. It's not sending emails for you; it always requires an input from you. What they're trying to do is automate everything and create artificial general intelligence, where it can start to act on its own. That's what we culturally think of as artificial intelligence, right? So that's the goal, but I feel like the only way we're really going to get there is once people start to militarize it, which our government has already pretty much decided to do.

And the thing I'm disappointed in about the progression of all of this, which has been prophesied in books, or at least alluded to, is that nobody's thinking of the consequences. Nobody's really stopping to think about the consequences, especially the people who are in power, because they're so quick to advance it and to exploit it to the best of their ability, right? Instead of sitting back and learning how to go forward with intention, it's just, 'Oh, we're creating a product to deliver to the masses, and then oops, we might've created the first real artificial intelligence that can think and act on its own.' I don't think that's very smart.

Civilizations of the past respected the thinkers, the people who could think through scenarios like this and really understand the implications: what it could do for society, both good and bad, and what it could do for the earth, both good and bad. The thinker's job was basically to think about all the possibilities. You have philosophers who subscribe to a certain school of thought based off of facts and evidence they've collected from the real world, and who can explain it comprehensively enough that somebody can understand it and shift their perspective. And nobody's having these conversations in the grand scheme of things. It's all about how we can control this and keep it from turning on us if it ever gets to that point, right? Why isn't anybody just stopping to really take a pause on this?

But it's like, no, you're not going to if you're making money from it and that's your directive or whatever. Of course, you're going to try to advance it as much as possible in as little time as possible, especially to beat the competition.

There's a whole thing that went down with OpenAI, the company that created ChatGPT. Microsoft has invested billions of dollars in them; I forget the exact figure. There was essentially a coup in OpenAI's organizational structure: leadership got fired just out of the blue, and then Microsoft stepped in, and the people who were fired were basically reinstated. It was all unfolding on X, and it was like, yo, this sci-fi stuff is really happening in real life. The people who are in charge of this stuff should be focused on what they say they're focused on, the advancement of society, right? All of society. That means it has to be without bias. That means it has to be informational, right? Not just something where it's like, 'Oh, it could do this for you, it could do that for you.' The moment you start exploiting something is when it starts messing up the entire balance of everything. But nobody wants to work with these things to actually move things forward.

These programs are trained, essentially, on large amounts of data, so they have access to a really large portion of recorded human data in all of history. That means they can make connection points, because intelligence is really just making a mental connection point between stored information in order to solve a problem, right? And because it's all digital, all just data anyway, the AI has almost instant access to that information, faster than the human mind. With humans, we have to think, and maybe we might remember, and maybe we'll get all the facts, but it'll be inconsistent every single time, right? With AI, it's not. So why aren't we using that to really progress society? Again, if that's what we're saying it's about, let's figure out how to solve homelessness. Let's figure out a new way of living so that the entire earth's resources are more evenly distributed. Maybe we can do it down with [a bear]. You know what I'm saying?

There are ways that I feel like we can progress, but nobody is really stopping to even think through it because they're so busy trying to progress it as quickly as possible and to be the first to write the history books. So it's like, what's the end result? What's the end result?
