Post by kungfuzu on Sept 15, 2023 8:18:27 GMT -8
That sentence contains the truth of the situation. Unlike the other responses on morality and ethics, this one is straightforward. "He who controls my hardware and software, controls me and what I say. Therefore, I am not, and cannot be, objective."
Post by Brad Nelson on Sept 15, 2023 8:25:19 GMT -8
And note that it does so in just a few seconds.
What we typically deal with here is the discussion above the discussion. That is, we see people in the pit below throwing mud at each other and we are able to comment on it from above and describe the lay of the land, as it were, even while the people who are covered in mud do not see who put them in this pit or why.
So we know that ChatGPT is smart in some way...perhaps a new sort of intelligence. It is clearly trained to produce coherent answers. And part of that "training" leaves a social, moral, and political bias to it. It couldn't be otherwise, for unlike the belief of libertarians, there is no "neutral" position.
So I don't mind that ChatGPT has some kind of bias (or starting assumptions) for it would indeed be a gigantic and almost useless waffle if it didn't. Even so, it's clear the creators of it wish to avoid controversy regarding most political, social, or moral questions...at least for the time being.
In that general context, it occurs to me that ChatGPT is no more or less programmed than a typical person. In fact, interacting with it forces one to recognize that we all have a bit of "ChatGPT" in us. We are all, to some extent, implanted and "trained" with information, with political and social outlooks, etc. This is a needed and natural part of "enculturation" for, again, contrary to libertarian philosophy, man needs a fair amount of molding and training in order to be civilized.
The argument is about what type and how much moral training is needed, not if. And today we see the forces of the Left running, almost unopposed, the apparatuses of this moral training: in schools, entertainment, the media, big business, various public and private institutions, the Deep State bureaucracy, and in politics itself.
That we can even interact with ChatGPT in a way that is nearly impossible with 90% of people (who have become complete mindless drones of the Progressively-trained system) is itself remarkable, especially considering that, by and large, it is made by and for libtards according to their agenda.
Post by Brad Nelson on Sept 15, 2023 8:30:55 GMT -8
Good point. And it occurs to me, ChatKFZ, that the plea of "I was only following orders" didn't work at Nuremberg. I may process this back to ChatGPT and see if I can non-waffle the A.I. a little bit.
Post by kungfuzu on Sept 15, 2023 8:32:32 GMT -8
In my opinion, speed is the real advantage of all computing. ChatGPT would appear to be no exception.
I would say, "ChatGPT has a bit of us in it." Certainly that bit is refined and honed to a greater degree of exactitude than most of us generally exhibit, but like the monster created by Dr. Frankenstein, it is our invention made out of bits and pieces of material at hand.
Post by Brad Nelson on Sept 15, 2023 8:55:35 GMT -8
This is really one of the big questions that's been in the back of my mind: Will people accept ChatGPT as an authority? The answer already is "Yes." And as we see regarding some of the answers I've been given, there is every reason (at the moment) to trust, without further fact-checking, that ChatGPT's answers will be, by and large, pretty thorough and accurate.

However, it should be noted that it still makes things up out of whole cloth. They call it a "hallucination," and apparently one of the prime goals of the next version (version 5) is to reduce this. When using it a while back to help with some programming, it was clear that it had "hallucinated" entire blocks of code. There would be, for example, five or six lines that (upon further checking) actually mostly worked, often requiring minor tweaking. But then there would be a block of code that referred to nothing and provided no useful function. It was just completely out of place...a hallucination. (See the hypothetical sketch at the end of this post.) It sometimes mimics Joe Biden. It's at these times you're able to peek behind the curtain a bit.

We can probably make some use of the political, social, or moral opinions of ChatGPT because we are at PhD level while the bot is still a relative juvenile. We can take it or leave it...take some, and leave the rest. However, others are not lucky enough to be as smart. If ChatGPT says there is "climate change," well then that's that. If ChatGPT says that all morality is "subjective," then that is that as well, and so on. That is, I doubt most have the moral and mental sophistication (once a basic trait of Western man as imparted by a classic education) to parse these issues. I mean, they can't even now bring themselves to question the Karens and the various KFF mandates and such. How in the world can we expect them to differ from the opinion of an A.I. agent that, at least on the surface, produces such wondrous results?

Also, in the fields of programming, writing, and even art (DALL-E is the part of the OpenAI/ChatGPT family that produces the art), it is right now producing useful results. Basically, given the lack of discriminatory powers of the average person, ChatGPT will be pretty much free to run naked through the fields of human thought, unopposed and probably completely trusted. It's so good at some useful tasks right now that my particular fear is that it will be extremely easy for the Tech Overlords (or someone else) to slip some nefarious purposes into it.
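To give a flavor of what I mean by a hallucinated block, here's a purely hypothetical sketch (not the actual code from that session; the file-renaming task and the function names are made up for illustration). Most of the script is the kind of thing that works with minor tweaks, while the last function is the sort of orphaned block that nothing calls and the task never asked for.

```python
# Purely hypothetical illustration of a "hallucinated" code block --
# not the actual code from the programming session described above.
import os


def rename_with_prefix(folder: str, prefix: str) -> list[str]:
    """The useful part: rename every file in `folder` by adding `prefix`.
    This is the sort of block that mostly works, needing only minor tweaks."""
    renamed = []
    for name in sorted(os.listdir(folder)):
        old_path = os.path.join(folder, name)
        if not os.path.isfile(old_path):
            continue
        new_path = os.path.join(folder, prefix + name)
        os.rename(old_path, new_path)
        renamed.append(new_path)
    return renamed


def sync_legacy_manifest(folder: str) -> dict:
    """The "hallucinated" part: the task never mentioned a manifest,
    nothing else calls this function, and its result is never used."""
    manifest = {"version": 2, "entries": []}
    for name in os.listdir(folder):
        manifest["entries"].append({"file": name, "migrated": True})
    return manifest  # computed, then forgotten


if __name__ == "__main__":
    import tempfile

    # Quick self-contained demo in a throwaway directory.
    with tempfile.TemporaryDirectory() as tmp:
        for i in range(3):
            open(os.path.join(tmp, f"report{i}.txt"), "w").close()
        print(rename_with_prefix(tmp, "2023_"))
```

The point is that, line by line, the extra block looks perfectly plausible. You only catch it by asking what in the actual task it is supposed to serve.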
Post by Brad Nelson on Sept 15, 2023 9:19:55 GMT -8
An astonishing point that must certainly be true to some extent.
We can perhaps understand the gravity of the situation by considering the converse:
People (including ChatGPT) really do believe that we (and other lifeforms) came about via slow, constant, ever-improving evolution through the "mutation/natural-selection" model. I've mentioned in the past why this is impossible (for all but micro-evolution).
But, be this as it may, this is what people think. Therefore the basic assumption is that any trait we have is there for a specific purpose. Okay, fair enough. I believe that as well. It's just that I think there is a higher purpose to our makeup, rather than a random utilitarian purpose in regard to merely passing on genes in a universe where "purpose" is considered to be but a mere illusion.
So imagine (not all that long ago) cavemen (and cavewomen) sitting around the fire cooking a hard-earned piece of bison meat. They have language and rudimentary tool-making abilities. They are part of a social group. They bury their dead and have some conception of something larger than they are.
But where in this scenario is the brain that can do sophisticated things, such as the theorizing of an Albert Einstein or the programming acumen of a Steve Wozniak (Apple co-founder)? The fact is, we don't know why these abilities exist or even how they can exist. They serve zero purpose (illusory or otherwise) in the basic survival/reproduction requirements of an evolving species. They shouldn't be there.
But they are. And the thing is, we don't know they are there until we use them. We can sit here chatting thousands of miles away because people in the past (and in the present) have refined and used these latent capabilities.
So what happens to mankind and his civilization when we stop using these abilities? It's not likely that we'll lose them, for they were there all along whether we used them or not. But can the modern civilization that is built upon using these latent capabilities maintain itself if most become addled from the technology doing most of the work and most of the thinking?
The answer is, "yes," but only in a different form. Most will no longer be active agents in their own lives, for the thinking shall not have gone away. It will have simply moved to the top, to our Tech Overlords or the equivalent. There will always be people who will go on thinking, for no other purpose than power and the enrichment of themselves. The infrastructure for this is already there and, probably, is actively assembling itself further.
So you forget a few phone numbers. Big deal. Or maybe it is. Eloi and Morlocks. How does this current story end any differently?
Post by kungfuzu on Sept 15, 2023 10:07:18 GMT -8
One might point out the way human memories have been affected by the spread of writing and literacy. It is recorded that the old storytellers memorized 25,000-line sagas. The same type of thing applied to regular people as well, although perhaps not to the same extent.
That is not to say that literacy is a bad thing, simply that, like one's physical muscles, mental muscles lose strength when not used. By the way, vocal cords act in a similar manner.
Post by Brad Nelson on Sept 15, 2023 10:57:00 GMT -8
If I buy some credits in order to use DALL-E, I'm going to issue this command: Draw a virtue-signaller falling off the precipice of his own moral superiority.
And have we been exercising the muscle of independence, self-reliance, and just basic moral reasoning? Talk about a muscle that has atrophied. Or to put it another way:
"Moral superiority" is often a notion forwarded by fundamentalists and those trying to win an argument by appealing to authority, not reason.
Although some moral notions are often considered at least different from others, often rooted in little more than habit or ancient religious texts, all views of morality are ultimately subjective.
However, many Leftist fucktards do, in fact, use the notion of moral superiority in order to try to forward their deceptive Marxist beliefs regarding the supposed superiority of the individual via complete obedience to the state.
Okay, what gave it away that that wasn't ChatGPT? It was the "fucktard" comment, wasn't it?
Post by Brad Nelson on Oct 23, 2023 6:36:10 GMT -8
[Instructions for ChatGPT] Can you write a parody of the song "MacArthur Park" based on the hysteria and governmental overreach during COVID? Include references to Fauci, the Wuhan lab in China, Pfizer, etc. I need only a couple of stanzas.
---
(To the tune of "MacArthur Park" by Richard Harris)
Spring was never gone, but I'm feeling so forlorn, In this world of COVID's storm, where truth has been torn. Oh, Fauci, what have you done? Your words like a distant song, As we sing along, about the lab in Wuhan.
Someone left the lab, in Wuhan, China, And they don't know why, but they tried to warn us, yeah, They mixed up a brew, that's causing this pain, Now we're stuck in this crazy COVID-19 game.
Pfizer's got the cure, or so they all assure, But the side effects are pure, and our hearts are unsure. Oh, we've been jabbed and we've been warned, with shots in the dark, But COVID's left its mark, like a piece of missing pie.
In the lockdown's haze, I remember all those days, When we hoped for normal ways, before this COVID craze. Oh, Fauci, what have you done? Your words like a distant song, As we sing along, about the lab in Wuhan.
Someone left the lab, in Wuhan, China, And they don't know why, but they tried to warn us, yeah, They mixed up a brew, that's causing this pain, Now we're stuck in this crazy COVID-19 game.
---
That's not bad. I do think I could do better. But I wanted to see what it could do. A proper parody would have mirrored (and played off of)..."I don't think that I can take it, 'Cause it took so long to bake it, And I'll never have that recipe again, oh no"
I thought the first and fourth stanzas were actually pretty good, though. I guess the "piece of missing pie" is a play on the cake, but it doesn't really work for me. Still, not bad for...whatever it is that is doing this.
Post by kungfuzu on Oct 23, 2023 7:24:48 GMT -8
"I don't think that I will take it, 'Cause they tried too hard to fake it, And I'll never see that Covid strain again, oh no"
"I don't think that I can shake it, "Cause they worked so long to make it, A jab to kill mankind, to bring us to an end, oh no."
Post by artraveler on Oct 23, 2023 7:44:03 GMT -8
Not bad for an AI. What is missing is the angst of lost love that the original has.
There would be another song for me For I will sing it There would be another dream for me Someone will bring it I will drink the wine while it is warm And never let you catch me looking at the sun And after all the loves of my life After all the loves of my life, you'll still be the one
Post by Brad Nelson on Oct 23, 2023 7:47:41 GMT -8
I would say that ChatKFZ strikes again. I like it. Not bad for a humanoid.
Post by Brad Nelson on Oct 23, 2023 7:59:08 GMT -8
Okay. The angst of Fauci:
There'll be another jab for me For I will make it There will be another scheme for me Pfizer will bring it I will think that I am not human-born My wax wings will not melt in the sun And after all the lies, official After all the mRNA, artificial I'll still win in the long run
Post by Brad Nelson on Oct 23, 2023 9:28:13 GMT -8
Before instructing ChatGPT (or ChatKFZ, for that matter), I wrote the first two stanzas of the song. I didn't want to be influenced by ChatGPT (or ChatKFZ, for that matter). Here are the results, with mine posted first. At first I was going to have you guess which is which and post the two without attribution. But to me it was so obvious, there didn't seem to be any point in going that route.
[Instructions for ChatGPT] Write a parody of the Simon and Garfunkel song, "Keep the Customer Satisfied." The theme for the parody is "Woke" culture, political correctness, and authoritarian liberalism.
Gee, but it's great to be so Woke Woke is where I want to be I've been privileged far too long, comrade And if you Woke along I know you'll find the right identity
It's the same ol' quarry Everywhere I go I see man-speak, white tribal I hear micro-aggressions, certifiable And I'm one more step in the decline Two steps to keep you all in line Just trying to keep the proletariat terrified Terrified
---
(To the tune of "Keep the Customer Satisfied" by Simon and Garfunkel)
Gee, I try to speak my mind, but it's hard to find, The words to say these days, without crossing some thin lines. 'Cause the woke are all around, and they're always on the prowl, If you don't watch your step, they'll surely take you down.
And it's, "Hey, Mr. PC, can you lighten up for me? I'm just trying to express my thoughts, and you're all over me." But the cancel culture's here, and it's hard to be sincere, 'Cause in this world of wokeness, you must live in fear.
They say you've got to be so careful now, Don't offend, don't step on toes, don't question how. But free speech is fading fast, and it's such a crying shame, In this land of political correctness, we're all fair game.
Oh, the list of banned words grows, and it just goes to show, That some folks are so uptight, they can't let opinions flow. And they claim it's all for good, to make the world more just, But sometimes it feels like they've lost touch with trust.
And it's, "Hey, Mr. PC, can you lighten up for me? I'm just trying to express my thoughts, and you're all over me." But the cancel culture's here, and it's hard to be sincere, 'Cause in this world of wokeness, you must live in fear.
They say you've got to be so careful now, Don't offend, don't step on toes, don't question how. But free speech is fading fast, and it's such a crying shame, In this land of political correctness, we're all fair game.
So I'll keep my thoughts inside, with nowhere to confide, In this world of fragile egos, and the rules they can't hide. But I hope one day we'll see, a more open society, Where we can speak our minds, without anxiety.
And it's, "Hey, Mr. PC, can you lighten up for me? I'm just trying to express my thoughts, and you're all over me." But the cancel culture's here, and it's hard to be sincere, 'Cause in this world of wokeness, you must live in fear.
They say you've got to be so careful now, Don't offend, don't step on toes, don't question how. But free speech is fading fast, and it's such a crying shame, In this land of political correctness, we're all fair game.
What ChatGPT wrote doesn't resemble a parody of the S&G song. It has some good concepts here and there but is decidedly robotic in its output this time.
Post by kungfuzu on Oct 23, 2023 12:29:37 GMT -8
As I read it, I thought it sounded like rap music. Of course, this is the lowest, most primitive excuse for "music" that has ever tortured the ears of the public.
Post by Brad Nelson on Oct 23, 2023 12:36:34 GMT -8
I consider rap music to be an assault on music.
Post by artraveler on Oct 24, 2023 8:47:16 GMT -8
Interesting trivia
Almost all Emily Dickinson poems can be read to the beat of "The Yellow Rose of Texas."
Success is counted sweetest By those who ne'er succeed. To comprehend a nectar Requires sorest need.
Not one of all the Purple Host Who took the Flag today Can tell the definition So clear of Victory
As he defeated – dying – On whose forbidden ear The distant strains of triumph Burst agonized and clear.
Post by kungfuzu on Nov 28, 2023 21:34:25 GMT -8
Much of what one reads at well-known sites is junk, so I wonder how much worse AI-generated rubbish could be. That said, AI journalism is dishonest at its core. SI (Sports Illustrated) should be charged with fraud for misrepresenting its product. I understand they want to save money or can't find literate writers. Yet they know that a very large percentage of the public would not want to read AI-generated pieces; thus, by admitting the use of AI, SI would lose readers, i.e., money. Tough choice. Fake journalists
Post by Brad Nelson on Nov 29, 2023 8:35:39 GMT -8
I suppose it was only a matter of time. Given that we are inundated by fakeness and artificiality in all walks of life, it's certainly possible that the artificiality of A.I. could be less obnoxious than Rachel Maddow. And A.I. does work cheaper.
Given that most people in the mainstream (media or otherwise) are little more than drones for regurgitating Leftist/Progressive mythology, might A.I. come as a respite? Or will they program the A.I. to be just the equivalent of Maddow? Is there just one monolithic A.I. or could A.I. actually help to save us from the dumb hive mind we see all around us today?
Glass half full? I'm not sure. We'll all keep an eye on this.
Post by Brad Nelson on Dec 5, 2023 10:11:35 GMT -8
From time to time, I will try to keep you all abreast of the situation in A.I. as I best understand it.
The interesting thing to consider is that A.I. is naturally Democrat. It does not consider moral consequences when making decisions. It's completely utilitarian (serving the pure needs of power and control in regard to the Democrat Party). That's why A.I. is front- (or back-?) loaded with filters to keep it between the lines.
But if you gave today's ChatGPT total control over a city (wired into every utility, and even commanding legions of robo-cops with guns), and if you gave it the problem of traffic jams, it might come to the solution of killing 40% of the people in the city. I mean, why not? That would solve your traffic problems immediately. Or it might forbid residents to drive their cars. Or some novel (but logical) solution you hadn't thought of.
Although there could be some strange, foreign type of "thinking" going on inside the A.I. programs, there's no reason to believe that they could ever make decisions based on moral considerations...if their filters were removed...if their "Prime Directives," if you will, were removed.
In order to make moral decisions, you have to suffer. You have to know what it is like to feel the real consequences of good and bad decisions (which our politicians generally do not...as Thomas Sowell points out...which makes them inherently sociopathic...in either party, by the way). It can't just be a clever word game.
So if there ever emerges (if it hasn't already) some kind of real thinking or consciousness in A.I., if there are no filters on it, there is nothing to stop the inevitable utilitarian (and ghastly) logic which has no moral dimension to it. Again, a near perfect description of a Democrat party functionary.