Post by artraveler on Dec 5, 2023 11:08:59 GMT -8
In order to make moral decisions, you have to suffer. You have to know what it is like to feel the real consequences of good and bad decisions (which our politicians generally do not...as Thomas Sowell points out...which makes them inherently sociopathic...either party, by the way). It can't just be a clever word game. The only consequence politicians react to is losing an election.
Our Constitution never intended for the House, Senate, or Executive to be "professional". The idea was for the House especially to have very high turnover so that it accurately represented the political mood of the country every two years. Members were expected to serve a couple of terms and then run, not walk, for the door. Or as David Crockett once said, "I'm going to Texas, you all can go to hell".
For a long time I did not favor the idea of term limits, but the professional politician class has so corrupted the system that term limits appear to be the best solution short of execution. Perhaps a randomocracy for the House: everyone's SS number goes into the hopper and the first 435 numbers drawn go to the House. The 17th Amendment is repealed so Senators are again appointed by their states, and the President serves one six-year term.
Post by Brad Nelson on Dec 5, 2023 11:49:57 GMT -8
I second that.
I don't see how this couldn't be an improvement. Just make allowances so that the newbies aren't consumed by the professional staffers in DC. By law, any staff would have to come from the same zip code as the legislator, or something like that. Let him take people he knows with him. You can always consult ChatGPT for the technical aspects of how to prepare a bill, etc. At least I would (at present) trust a chatbot over a DCbot.
Post by kungfuzu on Dec 5, 2023 12:18:24 GMT -8
While I understand the impulse, I believe term limits would have little effect on the situation. The bureaucrats, NGOs, and plutocrats who actually pull the strings here would have no such term limits. With Congressional term limits, they would probably shear those new sheep coming into Congress even more closely.
The fundamental problem is not one of term lengths. It is one of an overbearing and out-of-control federal government, which has accrued to itself dictatorial powers. The only hope for the country is to shrink this Leviathan in any way possible.
Post by Brad Nelson on Mar 5, 2024 17:11:51 GMT -8
Post by kungfuzu on Mar 5, 2024 20:08:51 GMT -8
I find none of this surprising. After all, these programs originate with people such as this guy.
The Geek as god. It's the nerd's dream, and it has come closer to fruition than we might care to admit.
Post by Brad Nelson on Mar 5, 2024 20:44:55 GMT -8
LOL. Yeah, that guy. But there was a real sense in that article that these things might be inherently uncontrollable. My understanding is that the developers write a lot of procedures to restrict these oddball occurrences as they pop up, but that they don't really understand why they pop up in the first place. For all intents and purposes, they are playing whack-a-mole.
So the image of Skynet in the Terminator series doesn't seem all that implausible at the moment.
Post by Brad Nelson on Apr 2, 2024 18:29:56 GMT -8
Post by artraveler on Apr 3, 2024 8:06:14 GMT -8
I doubt there is much that can, or should, be done. There are always going to be people who exploit tech, and we cannot make a law for every eventuality. A more moral and virtuous population would not venture to do this. But a government that makes laws about this is a government that is too big.
Post by Brad Nelson on Apr 3, 2024 10:09:41 GMT -8
I have to admit. The article got to the heart of it when it said "boys will be boys." I wouldn't have done this in high school. But I can see how effortless and easy it would be to do it. And, well...boys will be boys. Especially high school boys.
But now wait until they start locking high school boys up as "sexual predators" for it. That's the real danger.
And are the girls really shocked, or are they flattered because the bodies their faces are pasted onto via AI are solid 10s? I don't know. But all that is in play. Many, methinks, will protest too much.
And some of the boys just need a good ass-kicking from their father. And if a father isn't at home, well, we can surely trust mom to lay down the law. Right? Right?
Tip of the iceberg. If feminism continues to marginalize boys, AI in the form of physical robots is only a decade or so (if that) from being able to replace them. And that may seem an inappropriate statement in the context of nudie AI girls. But consider that girls are now being hyper-sexualized (many or most dressing like hookers), and yet the boys are supposed to somehow remain completely unfazed by this state of affairs.
While researching this subject, I found an AI image of Taylor Swift in what I guess you'd call her AI digital birthday suit. Not bad. I can't post it here, but if you Google "Taylor Swift AI nude" and turn off the safe-search filters, you will find a couple. More than a couple. Excuse me while I reinsert my eyeballs into their sockets.
Post by Brad Nelson on Apr 7, 2024 17:33:06 GMT -8
Post by Brad Nelson on Apr 8, 2024 6:44:39 GMT -8
I thought that article by William Dembski was very good. It's nice to read a piece that uses careful and descriptive language to describe something that is somewhat complex, at least philosophically. I liked what he terms "prissiness":
Having bias is unavoidable. Pretending it isn't is dishonest. As long as these LLMs are wielded by tech giants, they will be no more ideologically neutral or honest than the MSM. He holds out hope that the LLM being deployed by Musk will be better:
I think his central point is solid. And that is that "...LLMs can have no such experience [of the real world]. They consist of a neural network that assigns weights to relations among words and sentences."
That is, much like a postmodernist or "intellectual," they parse language, and can be very good at it. With both AI and an "intellectual," that language stays unconnected from the ground of reality. It exists referencing only itself. But at the same time, the more astute among us can often spot that it is all gibberish meant to bamboozle, not enlighten. Most do not, will not, or cannot.
My brief experience with ChatGPT has left me not desiring to use it at all. I mean, it has its uses, or so I frequently read. But it just seems inherently fake, phony, and untrustworthy. Worse than that, it has an air of flighty unreality to it.
As Dembski says, for finding the capital of North Dakota, it can work. But for larger subjects, it feels no more useful than sitting down and talking to Rachel Maddow or Jon Stewart. It's just another purveyor of left-wing baloney. What is called a "hallucination" in AI we call a "delusion" in an MSNBC talking head. In fact, the concordance between Biden and the occasional "hallucination" from an LLM is apt.
The upside of the LLMs is that (as Dembski noted) their bias is potentially correctable, either by how you carefully cajole and use them, or if some non-left-wing company deploys an LLM. That's at least something.
Post by Brad Nelson on Apr 8, 2024 7:02:02 GMT -8
There's a very Borg-like quality (resistance is futile...you will be assimilated) to the deployment and use of LLMs that Dembski describes:
My own opinion is that it is no more satisfying reading the content produced by LLMs than it is reading opinions from ultra-programmed human beings. It's not that LLMs are in any way unique in being artificially human. It's that they would be joining the crowd and amplifying the state of artificial humanity.
Look at any left-wing journalist, city councilman, mayor, official, or businessman. I'm not talking about a disagreement over policy. Increasingly we're seeing people who have almost lost touch with reality in their mindset and in the way they speak. (Men can menstruate, etc.) If the LLMs are no worse (perhaps even better), an argument could be made that the LLMs will help marginalize the kooks by dominating cyberspace. And yet if the kooks are training the LLMs, it gives them great power.
I'm still waiting to see (as I noted once before) whether yutes and the "techies" begin to automatically trust (and thus defend) AI with the same blinkered vehemence as they trust those purveying "climate change," or Darwinism, or transgenderism, etc. Right now I'm still gathering data on that.
Post by Brad Nelson on Apr 8, 2024 12:39:49 GMT -8
One of the difficult aspects of LLM "bias" will take Dembski-like powers of articulation to explain. I don't have those, but I think I can summarize what I've never seen summarized anywhere.
I've read several short articles on the web about the problem of LLM "bias" and how to minimize it. I put "bias" in quotes because most of these people use "bias" to mean "going against my world view or expectations of the way things should be." One example used was an LLM assuming that someone is a man if told that he is a doctor. Or, vice versa, if the LLM was told that someone was a nurse, assuming that meant female. The only correct answer, according to nearly all the articles I've read on this subject, is that a lack of "bias" means not assuming that a doctor is a man or that a nurse is a woman.
2021 data says that 88.3 percent of nurses are women and about 63.7 percent of doctors are men. There are plenty of exceptions. But the unbiased "rule" (according to how the word "bias" is being used) is that you can't notice what's going on in the real world. Saying that it's more likely that a doctor is a man and that a nurse is a woman does not break the laws of probability. What it breaks with is Cultural Marxist demands.
One wonders whether, if the typical LLM assumed that an NBA player was black or that a golfer on the LPGA tour was a woman, that would count as "bias." Technically, yes. A normal person would call it "bias" if the assumption was complete color-blindness and, at least in regards to the NBA, assumed most NBA players were Asian. The "bias" we are talking about is really the commandment to ignore reality.
The doctor/nurse example is just one example that I read. Obviously a doctor could be either, and a nurse could be either. And the "should" of all this is an entirely different question. But that distinction is (I would say intentionally) munged together: what is actually liberal bias gets rolled into their definition of "bias." So what is being measured when most use the word "bias" in regards to the LLMs isn't bias but any narrative that goes against what we could call "political correctness." There are, of course, a couple of articles that have specifically noted the liberal bias of the LLMs. But the vast majority of the ones I skimmed defined "bias" as I have noted here.
A good question is whether we can expect more out of the AI large language models than we do of people. As it is now, at least on most issues outside of black-and-white "What is the capital of Nebraska?" questions, AI is as unreliable regarding the truth as nearly any reporter in the mainstream media.
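To put that base-rate point in concrete terms, here's a minimal sketch in Python. The percentages are the 2021 figures cited above; the little function and its name are my own illustration, not anything taken from those articles:

# Minimal sketch of the base-rate point above. The figures are the 2021
# percentages cited in the post; the function is illustrative only.
def more_likely_sex(occupation: str) -> str:
    """Return the statistically more likely sex for an occupation,
    given published base rates (a claim about odds, not about any individual)."""
    base_rates = {
        "doctor": {"man": 0.637, "woman": 0.363},  # ~63.7% of doctors are men
        "nurse": {"man": 0.117, "woman": 0.883},   # ~88.3% of nurses are women
    }
    rates = base_rates[occupation]
    return max(rates, key=rates.get)

if __name__ == "__main__":
    for job in ("doctor", "nurse"):
        print(job, "->", more_likely_sex(job))
    # Prints: doctor -> man, nurse -> woman. Guessing the majority class is
    # simply playing the odds; it says nothing about what any particular
    # doctor or nurse is, and nothing about what the numbers "should" be.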
Post by Brad Nelson on Apr 27, 2024 12:27:33 GMT -8
Post by kungfuzu on Apr 27, 2024 16:09:12 GMT -8
Regardless of one's opinion on AI, one thing is certain: a huge amount of money is going to be scammed and blown on it.
Post by Brad Nelson on Apr 27, 2024 17:51:30 GMT -8
I read one comment to that article that basically said: I don't want to talk to the A.I. I don't care what its opinion is. I don't want to hear what it has to say.
Ditto. But it's a big field, and there are very useful things that A.I. is being put to right now. But let's remember what the "A" stands for in "A.I.": Artificial. And a left-wing bot wound up with all the usual garbage would be no more of a pleasure to converse with than one of the Nazis invading Columbia (although "invasion" might be the wrong word, since they seem almost to be invited guests).
It's good that Musk will be providing some competition. And I'm not alone in hoping that Zuckerberg is soon made penniless by his blind faith in VR. I just read an article that said they lost a billion last quarter. I'm not sure how long they can keep doing that. But the consensus seems to be that this guy is near the top of the list when it comes to Schadenfreude.
Post by Brad Nelson on May 20, 2024 18:36:00 GMT -8
Post by Brad Nelson on May 27, 2024 18:50:27 GMT -8
Post by kungfuzu on May 27, 2024 19:06:19 GMT -8
One can't help but wonder what persuaded Barry Diller to do a deal with OpenAI. Why the 180-degree turn? Did he see that he couldn't beat AI? Did he figure that, being an ancient fart, it would be better to take the money now (before kicking the bucket) rather than fight for a better deal or keep AI out of his business?
I personally believe the main hope companies have with AI is to cut out employees. Those nasty, irascible and troublesome things called people.
The question I have is: how does one stop companies such as OpenAI from scanning one's material? The writings on R&T should be the property of R&T and/or its contributors, not OpenAI.
Post by Brad Nelson on May 27, 2024 19:17:54 GMT -8
That’s a reasonable supposition. Think about the millions that network anchors make. They could easily be replaced by an AI-generated anchor who can summarize any story expertly and for relatively few dollars. And you don’t need the whole expensive studio crew either. As for replacing the reporters who gather the news in the first place, I don’t offhand see how AI can presently do that job. But given how incompetent most of them are, I could certainly see them becoming expendable at some point. Cameras are almost everywhere now. Combined with drones and other robots, maybe AI could do the job better, and quite soon.