Post by kungfuzu on Sept 7, 2023 15:41:33 GMT -8
As the father of someone who is seriously handicapped (his teachers at school thought he was autistic), I find it interesting that so many of these guys appear to revel in their "autism," real or imagined. In fact, I would guess that many of those claiming to be on the spectrum have no idea whatsoever what autism really is. They claim to be on the spectrum because it makes them special, or it gives them some excuse for being an asshole. You know, they just couldn't help doing this or saying that. Maybe I am wrong, but I doubt it. Being a Homo sapiens, particularly a male, often entails being an asshole. To control oneself, to control one's behavior, can be difficult. I suspect it becomes more difficult the less personal contact one has with others. Being a clod is quite natural for many, if not most, males, unless they want something. Once that something is attained, they generally go back to being a clod. Perhaps the modern promiscuity of women has led to the increasing cloddom(?) of males.
A.I.
Sept 7, 2023 16:13:33 GMT -8
Post by kungfuzu on Sept 7, 2023 16:13:33 GMT -8
100% agree. Furthermore, from the way the voice described it, I think there is a great danger that, because of the huge computing power behind this technology, the public will see the results spewed out by this program as the truth. This will lead to even more brainwashing.
One point which the voice passed over very quickly is that this technology "mimics understanding." I think that is crucial. It doesn't actually understand anything, and anyone using it should realize that they need to approach things from a number of different directions before deciding on what is or isn't correct.
I must admit, I have never understood how Google gained such market dominance. I used AltaVista and Netscape Navigator(?) as well as other search engines in the past, and I didn't find Google to be any better. I refuse to use Google today, and I think it is an evil company. Not only does it suck up all the information from your computer usage, it is a brainwashing machine which steers its users to certain preferred sites while purposely neglecting others. This is done for financial as well as political reasons.
Unfortunately, Microsoft is as bad as or worse than Google. The best thing that could happen would be for both to get into such a vicious competition that they destroy each other. And while this was going on, other smaller firms could pick up the slack, thereby dispersing the power of the technoverse across a multitude of companies. I am not holding my breath for such an outcome.
Brad Nelson
Administrator
עַבְדְּךָ֔ אֶת־ הַתְּשׁוּעָ֥ה הַגְּדֹלָ֖ה הַזֹּ֑את
Posts: 12,261
Post by Brad Nelson on Sept 7, 2023 16:20:42 GMT -8
I find it offensive.
I agree with all those reasons. Let's count up what we have so far:
1) Lack of intestinal fortitude (aka they are a Snowflake)
2) Wish to portray themselves as a victim
3) Wish to apologize for or escape from their "toxic" masculinity
4) Want to be "special"
5) Want an excuse to be an asshole. (I love this one. I don't know why it didn't occur to me.)
6) Related to #4: By announcing that they are wounded and vulnerable, they are playing into the "nurture" instinct of women (and, by the same token, avoiding the wrath they perceive they would get from acting male...see #4)
Brad Nelson
Administrator
עַבְדְּךָ֔ אֶת־ הַתְּשׁוּעָ֥ה הַגְּדֹלָ֖ה הַזֹּ֑את
Posts: 12,261
Post by Brad Nelson on Sept 7, 2023 21:09:47 GMT -8
It's interesting that ChatGPT shares many attributes (think of that quote by the PhD statistician that KFF posted) with highly-motivated, intelligent, socially awkward, and morally superficial men or young men.
Without a doubt, a good computer programmer is a marvel. Make no mistake. They are the true (and new) keepers of the temple. And that temple is the computer. Those who know it best can run the world (Google, Apple, Microsoft, Facebook).
Just because we don't like some things about AI or those who produce it, we shouldn't let that taint our opinion of the near genius (or at least high skill) of many of these programmers.
But at the end of the day, "All algorithms reflect the bias of their programmers, whether the programmers are conscious of it or not."
I think it's safe to say the A.I. chat bots (such as ChatGPT) will, at least in the short run, be enormously talented devices – but also socially and morally awkward ones. We can look to them for help with writing or with code. But only the tattooed crowd of The Golden Children who pray at the Temple of Tech are naive enough to believe that moral and social questions can be answered by a machine.
That said, one programmer online thinks that ChatGPT, and systems like it, will put a lot of junior programmers out of work just as every other technological advance has impacted a certain sector of the economy.
Surely that will be true. By sheer chance, today I ran into a mention of Shirley, a novel by Charlotte Brontë (author of Jane Eyre). It's set in the early 19th century in Yorkshire, a period that saw the rise of industrialized textile manufacturing as well as the Luddites' resistance to the factories. I found the book at the local online library and have read the first few pages.
The message regarding AI replacing some jobs is not all bad, for if programming becomes less expensive to implement, that could well mean better and cheaper products. I suppose, given the Linux model of an open source operating system, that if there were an open source A.I. chat bot, it could have the potential of competing with, if not taking down, some of the tech giants. No wonder Microsoft all but subverted what started out as an open source A.I. project meant to "benefit all of humanity without consideration of profit." Well, that didn't last long.
The good news is that, at least in the short term, there will be competition among the tech giants, who will all try to monopolize this ChatGPT type of A.I., which has everyday uses for the everyday man. I've even used it, although the programming it spits out now is highly faulty.
But it will get better. And people are already using it to write papers, make web pages, etc. Oh, yes, without a doubt, this will lead to the further dumbing-down of humanity.
The only evolutionary story (or conjecture) of this kind that seems sensible to me is the claim that the domestication of the dog coincided with a reduction in human brain size (as measured by sheer cubic-inch capacity of the skull) of about 5%. The theory is (and it sounds completely reasonable to me, if ultimately unprovable) that man's abilities atrophied because dogs could hunt, guard the camp, etc. And if you think about it, to the extent that micro evolution can happen, what else could have driven the kind of high brain agility and hand-eye coordination that humans (especially males) have than the act of hunting?
If you look at how breeding for American Kennel Club competition has dumbed down dogs (selecting for looks rather than utility), it's not hard to imagine a 5% drop. This breeding has also made the dogs unhealthy in many ways.
We might step back to 1980, drink the Kool-Aid of Apple Computer, and suppose that these new computer tools will empower the individual to creative heights. Indeed, if you look at how they are used in art, science, and the film industry, this is so for a great many people.
But, by and large, unless your definition of "empowering" is very liberal, staring into your phone screen for hours on end, clicking and swiping at mere nothings, can hardly be called helping people reach their creative heights. Instead (and as we know from insider info at Facebook and other "social" media companies), this stuff is designed specifically to get you mindlessly addicted to the tech.
One can only imagine the brain shrinkage that will occur from virtual reality, AI, and other such things. As now, even in the midst of stupor, most will think themselves the height of intelligence because someone (the media...or, more likely, the Tech Lords) is telling them that they are (for holding specific political opinions, etc.).
I can foresee the playing out of a common sci-fi dystopia theme. We could easily become all but enslaved to the technology by Morlocks who view us (and treat us) as Eloi. And can you blame them?
Brad Nelson
Administrator
עַבְדְּךָ֔ אֶת־ הַתְּשׁוּעָ֥ה הַגְּדֹלָ֖ה הַזֹּ֑את
Posts: 12,261
A.I.
Sept 8, 2023 16:44:37 GMT -8
Post by Brad Nelson on Sept 8, 2023 16:44:37 GMT -8
One of the bits of info I ran into while researching ChatGPT, and the Tech Giants in general, was a video that talked about how the World Wide Web had basically become various zones of Google. Google is the 2,000-lb. gorilla that drives the web at the moment. It makes money by selling businesses placement at the top of the search results. That is (in case you need me to spell it out), its "search" engine is a gigantic fraud. You're not searching the web like you would search for a word in a dictionary. You are being steered toward the results that someone has paid big money to steer you toward. This isn't news, but it's worth pointing out.
And it's interesting, this opinion I ran into that said the drive for good placement in Google's search engine drives much of the web. The process/software is generally called SEO – search engine optimization. There was a component to that when I used WordPress to publish StubbornThings. If you wanted any chance at all of being found by Google's search bots (and we're talking about non-paid-for search results), you had to fiddle with a bunch of stuff behind the scenes.
The assertion (true, of course...it's only the amount one might quibble about) of this one poster was that the drive for optimum placement in Google's search engine has created an entire fake web, more or less. There are automated processes that have businesses producing thousands of pages and web sites with the sole purpose of gaming Google's search bots and Al-Gore-rhythms. They flood it with a topic or content in a way that improves their "SEO" score and thus the likelihood that their product or service will be discovered via a search. Whether this is done in conjunction with paying Google, or as a way to avoid paying, it did not say.
You probably have already seen this. You've searched for some topic, perhaps some "how to" topic, and you run across a whole bunch of sites with pretty much boilerplate text. A common occurrence is when you're, say, looking for reviews of air conditioners. You will find all kinds of "review" pages that are little more than ranked links to the products at Amazon.com. And I'm not saying that Amazon has anything to do with this, although they could. But it's clear this is automated content with some purpose that is not straightforward or particularly honest. Just as there is "fake news," there is "fake web." And there is probably much more of the latter.
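To give a sense of how cheap that kind of content is to crank out, here's a toy sketch of a templated "review" page generator. Everything in it (the product names, the links, the template itself) is made up for illustration; it's not any real SEO tool, just the general mechanism as I understand it:

# A toy illustration of how cheaply templated "review" pages can be mass-produced.
# Everything here (products, links, template) is made up for illustration; this is
# not any real SEO tool, just the general mechanism.

TEMPLATE = """<html>
<head>
  <title>Best {product} of {year} - Top Picks Reviewed</title>
  <meta name="description" content="Our experts rank the best {product} of {year}.">
</head>
<body>
  <h1>The Best {product} of {year}</h1>
  <p>Looking for the best {product}? We compared the top models so you don't have to.</p>
  <ol>
{items}
  </ol>
</body>
</html>"""


def make_page(product: str, year: int, affiliate_links: list) -> str:
    # Each "review" is just a ranked list of outbound affiliate links.
    items = "\n".join(f'    <li><a href="{url}">Top pick #{i + 1}</a></li>'
                      for i, url in enumerate(affiliate_links))
    return TEMPLATE.format(product=product, year=year, items=items)


if __name__ == "__main__":
    # One template, thousands of keyword variations = a flood of near-identical pages.
    for product in ["air conditioners", "dehumidifiers", "space heaters"]:
        page = make_page(product, 2023,
                         ["https://example.com/item-1", "https://example.com/item-2"])
        print(page[:60], "...")

Run a loop like that over a few thousand keyword variations and you've got a little "fake web" of pages whose only job is to rank in a search and funnel clicks to affiliate links.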
Brad Nelson
Administrator
עַבְדְּךָ֔ אֶת־ הַתְּשׁוּעָ֥ה הַגְּדֹלָ֖ה הַזֹּ֑את
Posts: 12,261
A.I.
Sept 10, 2023 17:22:30 GMT -8
Post by Brad Nelson on Sept 10, 2023 17:22:30 GMT -8
Maybe we need some artificial intelligence because regular intelligence sometimes just doesn't cut it. First off, don't believe the fable that because SSDs (solid state drives) have no moving parts they are extra reliable. I had my main SSD on my Mac shit its pants yesterday. Out of nowhere it just became so corrupted that it wouldn't even boot. I first learned of the corruption when I tried to do a backup of the entire drive via Carbon Copy Cloner. Carbon Copy Cloner squawked that my hard drive was too corrupted for it to work with (although I had seen no symptom before this).
I back up the entire hard drive this way with Carbon Copy Cloner every so often just so I have an exact mirror image of my entire system (programs, files, system, everything). Then should a drive shit its pants (or just be accidentally erased), I can be back up and running quickly. You just boot from the cloned drive and you're right back into the exact system (files, programs, everything).
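As an aside, if you don't have Carbon Copy Cloner, the same basic idea (a periodic full mirror of a volume) can be roughed out with rsync. This is only a sketch, the volume paths are made up, and unlike Carbon Copy Cloner it won't give you a bootable clone, just a file-for-file mirror:

# A bare-bones stand-in for the "periodic full mirror" idea, using rsync.
# Just a sketch: the volume paths are made up, and unlike Carbon Copy Cloner
# this does NOT produce a bootable clone; it only mirrors the files.
import subprocess
from datetime import date

SOURCE = "/Volumes/Mavericks/"      # hypothetical source volume
DEST = "/Volumes/BackupClone/"      # hypothetical destination volume


def mirror_backup(source: str, dest: str) -> None:
    # -a preserves permissions and timestamps; --delete makes the destination
    # an exact mirror by removing files that no longer exist on the source.
    subprocess.run(["rsync", "-a", "--delete", source, dest], check=True)
    print(f"Mirror completed {date.today()}: {source} -> {dest}")


if __name__ == "__main__":
    mirror_backup(SOURCE, DEST)

Schedule something like that to run weekly and the gap between mirrors stays small.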
And so I was back up and running quickly. I also have Apple's Time Machine running, so it does an incremental backup, several times each day, of one gigantic folder that has all my work files in it. So that's doubly backed up. Unfortunately, it had been three weeks since I did the entire mirror backup of the drive via Carbon Copy Cloner, so the last couple weeks of emails are lost, but that's no big deal. And I did have to re-enter about two or three weeks of checks and deposits in my bookkeeping app because it's not backed up incrementally the way my work files are. Why not? Because those bookkeeping files exist inside a Mac OS 9 emulator (SheepShaver...that's its name) and those files live in the applications folder, not with my work files (the ones that get incrementally backed up via Time Machine several times each day). No excuse. I should have done the mirror backup sooner. I try to do it every week, but it's slow, so the need seemed less urgent, or it just slipped my mind.
Strangely, the SSD that failed had two partitions on it. One partition (with the Mac OS "High Sierra" system on it) was fine and checked out okay when I ran Apple's Disk Utility on it. But the other partition (which had my operating system, "Mavericks," on it and everything else) went tits up. That could be a sign of hardware failure that could raise its ugly head again, even though the corrupt partition reformatted just fine. I'll keep my eye on it. I have no idea how the drive got corrupted in the first place. Maybe it was listening to too much MS-NBC.
So, being the diligent sort (better late than never), it occurred to me that I didn't have a backup of my Windows 10 PC. And I do a lot of stuff on that. My iMac I keep for business purposes only, so I run no games on it or anything like that. I don't want to take the chance of mucking it up. So my Windows 10 machine is for gaming...and it's almost my main internet machine. I do all my fiddling with retro computing emulators on it as well. So it's got a lot of stuff on it. Should this SSD go tits up, it would be difficult to set it all back up again as it was.
So I had an extra 1-terabyte Samsung SSD sitting around (who doesn't?) and decided it was time to back up the Windows 10 SSD. I used the Samsung cloning software. It was the same software that I had used before when I cloned my existing mechanical hard drive to the SSD and then made that SSD the startup disk. And now that I think back, I do think I had some small problem in that process. More on that later.
Which brings us to the meat of the story: It is difficult to believe that professional software or hardware companies could be so incredibly stupid. Long story short: Upon successfully cloning my solid state hard drive to another solid state hard drive (which was easy to do), the computer shuts down (which you are told it will do at the start of this process). Now, the thing is, it would not start up again. In no way was my original boot disk messed with. It was the disk that was copied from, not to. So what gives? Twice upon a failed startup I got a message from Windows saying it had shit its pants and couldn't boot, so would you like to try A, B, C, or D? And, believe me, "What the fuck?" was at the top of my mind. Here I am doing due diligence, and assholes at both Samsung and Microsoft are having their way with me.
So here's what happened. In the process of cloning, the Samsung software makes an exact duplicate of whatever disk you are cloning. In the Mac world, no one cares. Both (or all) disks are, of course, going to show up on your Desktop, whether cloned or otherwise. It's what an OS is for. Show the damn disks on my Desktop and don't fuck with me. But Windows and Samsung are run by fucktards. What happened was that because both drives were listed as the startup "C:" drive, Windows shit its pants. It couldn't start up. So answer me this: 1) Why can't Windows just choose a volume to start up with and ignore the other disk? 2) Why does Samsung create a software tool that, if you keep your cloned disk attached to your computer on the next startup, will keep the machine from actually starting up?
Assuming the fucktards at both Microsoft and Samsung hadn't actually done anything to my computer's original copied-from startup drive, I made the logical assumption that having that second cloned SSD plugged in was jamming things up. I had it plugged into an external USB port anyway, so it was no problem to unplug it. Sure enough, the computer booted just fine. Once booted, I plugged the external SSD back in and Windows recognized it. But it didn't mount it. (Those are indeed two different things. Don't ask me why.) After digging into some settings, this is when I learned that both drives had been made the "C:" drive. Windows just wasn't going to mount it.
I Googled the problem and found an intelligent soul out there who had the answer. The correct answer is at the very bottom of the page. Ignore the fucktards giving all kinds of really bad advice to fix this. The fix was simply to right-click on the little "Offline" icon and choose "Online." Jesus H. Christ. And Windows couldn't do this on its own automatically? And why was this setting so hidden? I had clicked all over the same dialogue box before and found nothing. But thankfully that one answer from Lead3 uncovered the riddle. Simple when you know the trick.
And these idiots at Microsoft have the temerity to invest billions in "artificial intelligence" when their idiotic operating system can't even mount a hard drive and handle the simplest of hardware conflicts? I mean, say what you will of the homosexual who runs Apple and how over-priced their stuff is. It just doesn't exist in the Mac world where you could clone a drive and then have that clone keep the computer from booting. This would be so laughably stupid in the Mac universe that it would be considered near blasphemy (because it would be so "Windows-like").
Anyway, it's truly shocking sometimes to discover that nobody at the wheel is awake. There is zero reason for Samsung's software and Microsoft's software to behave in this way. And I do remember, when I cloned my mechanical hard drive to the SSD a couple of years ago, that I had the problem with one or the other drive not being mounted. But I don't think I had any startup issues at the time. It boggles the mind that such bad software can exist for so long. This is not a small bug.
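For the record, that Offline/Online fix can also be scripted with Windows' built-in diskpart tool. What follows is only a sketch based on my understanding of the problem (the cloned disk comes up offline because it carries a duplicate disk signature); the disk number is hypothetical, so confirm it first:

# A sketch of scripting the "bring the cloned disk Online" fix with Windows'
# built-in diskpart tool. The disk number is hypothetical; confirm it with
# Disk Management or diskpart's "list disk" first, and run this elevated.
import subprocess
import tempfile

DISK_NUMBER = 2  # hypothetical: the cloned SSD that shows up as "Offline"


def bring_disk_online(disk_number: int) -> None:
    # diskpart reads its commands from a script file passed with /s.
    script = (
        f"select disk {disk_number}\n"
        "online disk\n"                      # same thing as right-click > Online
        "attributes disk clear readonly\n"   # in case it also came up read-only
    )
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(script)
        script_path = f.name
    subprocess.run(["diskpart", "/s", script_path], check=True)


if __name__ == "__main__":
    bring_disk_online(DISK_NUMBER)

As I understand it, Windows assigns the disk a new signature once it's brought back online, which is why the conflict goes away, but don't quote me on the internals.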
Post by artraveler on Sept 10, 2023 18:13:21 GMT -8
"I had my main SSD on my Mac shit its pants yesterday."
By an odd coincidence, my Mac died last week during a download of a software update. My service guy determined that the HD had SMART errors and was not repairable without replacing the HD. So, I fired up the Apple Card and bought a new one. I use two backup drives and iCloud, so nothing was lost. I also back up important docs to my PC via iCloud. The new Mac is faster than the old one. I will probably have the HD replaced on the old one and gift it to one of the grandkids. My Mac guy says $350-450 is a fair price considering the tight space they have to work in. I wonder if there was a bug in the download that caused the error?
Brad Nelson
Administrator
עַבְדְּךָ֔ אֶת־ הַתְּשׁוּעָ֥ה הַגְּדֹלָ֖ה הַזֹּ֑את
Posts: 12,261
A.I.
Sept 10, 2023 18:36:04 GMT -8
Post by Brad Nelson on Sept 10, 2023 18:36:04 GMT -8
My best guess (from experience) is that nothing stresses a drive like doing a software update.
A friend of mine gave me a PowerBook G4 Titanium last year. I had mentioned that on the retro thread. It's a really nice old retro machine.
But when I was updating it to a newer version of Mac OS X, the drive shit its pants. Whatever is involved in software updates, it certainly involves chugging the hard drive very very hard.
I suspect that's what happened in your case. I'm glad you had it all backed up and it was only an inconvenience.
So I take it that you got an entire new machine. What did you get? And what is your old one that needs a hard drive?
Post by artraveler on Sept 10, 2023 19:35:22 GMT -8
"So I take it that you got an entire new machine. What did you get? And what is your old one that needs a hard drive?"
Got a new 24-inch desktop with Apple chips and 8 GB, loaded with the Ventura 13.5.2 OS. The old Mac was only four years old but did not have the Apple chips. When I have the HD replaced, I'll ask if replacing the chips with Apple chips is advisable. They are much faster. I used my Apple Card for the purchase, since for Apple products they forgo interest on the loan. With credit rates at over 20% on most cards, that's like a gift of about $300 off the price. I bought my last phone the same way.
A.I.
Sept 10, 2023 20:15:47 GMT -8
Post by kungfuzu on Sept 10, 2023 20:15:47 GMT -8
For some reason, I have always thought the main advantage of SSDs was that they are not bothered by frequent movements of the computer. This would be good for people who move around a lot with their computers when they travel, etc.
Brad Nelson
Administrator
עַבְדְּךָ֔ אֶת־ הַתְּשׁוּעָ֥ה הַגְּדֹלָ֖ה הַזֹּ֑את
Posts: 12,261
A.I.
Sept 10, 2023 21:27:53 GMT -8
Post by Brad Nelson on Sept 10, 2023 21:27:53 GMT -8
That might be a factor. But it's interesting to note that the early iPods had a physical hard drive in them. And from what I understand, they held up very well. I really don't know how they avoided head crashes and all that. I'm sure the failure rate of SSDs is lower than that of mechanical hard drives. But it's obvious they do fail. You can find some waffle info here. Maybe. Maybe not. Depends. Nearly everything I found in a quick search points to this one study, so not a lot of data points at the moment, as they say. But it doesn't matter. Hard drives are going the way of the Dodo. SSDs are faster, cheaper (or getting there), and smaller. Even if the reliability were the same, that would give them an edge. They likely draw less power as well, a yuge consideration for laptops.
Brad Nelson
Administrator
עַבְדְּךָ֔ אֶת־ הַתְּשׁוּעָ֥ה הַגְּדֹלָ֖ה הַזֹּ֑את
Posts: 12,261
A.I.
Sept 10, 2023 21:32:28 GMT -8
Post by Brad Nelson on Sept 10, 2023 21:32:28 GMT -8
Okay. Gotta ask what color? Pink? LOL. I don't think that's an option. I see you getting the blue or maybe the green. I do sort of like the yellow.
Post by artraveler on Sept 11, 2023 7:56:07 GMT -8
Blue and white just like my Israeli flag.
Brad Nelson
Administrator
עַבְדְּךָ֔ אֶת־ הַתְּשׁוּעָ֥ה הַגְּדֹלָ֖ה הַזֹּ֑את
Posts: 12,261
Post by Brad Nelson on Sept 11, 2023 8:30:35 GMT -8
Excellent. I also noticed there was silver which would be a good neutral choice. But you're not by any means neutral, especially about Israel, so I like the blue. The mother of the girl my younger brother lives with went on a spending spree last fall and bought herself and her two grandsons a 24" iMac (her daughter got a new laptop, and Bryan got an iPad Air...generous grandmother). I forget which colors they had. But I got to see one close up and that's a really nice computer. It's fast with a crystal-clear monitor. Or this version. She's a hot babe but I wonder if Mr. Flu would agree that her voice is just a little perfunctory? But who am I to criticize? My older brother had a version of this on a 45 with some chick singing it. I'm trying to remember if it was this one (Dusty Springfield).
Brad Nelson
Administrator
עַבְדְּךָ֔ אֶת־ הַתְּשׁוּעָ֥ה הַגְּדֹלָ֖ה הַזֹּ֑את
Posts: 12,261
A.I.
Sept 11, 2023 15:46:00 GMT -8
Post by Brad Nelson on Sept 11, 2023 15:46:00 GMT -8
Here are a couple discussions of A.I. The first will be digestible enough because it's presented by Jordan Peterson:
This second video features a person I know nothing about, but he seems to be involved in A.I. research. His shtick is that a small number of people are pushing hard and fast to make a quick buck, and they don't care squat if it all blows up in our faces.
If you watch this second video, just start at the eight-minute mark. It is a rambling interview, and the interviewee can get rather animated. But here and there he makes some intriguing points.
Let me summarize a few points from what I've watched so far:
1) ChatGPT and other LLMs (large language models) are essentially black boxes beyond the first layer or two. That is to say, by their own admission, nobody really understands what exactly is going on inside.
2) ChatGPT (and other A.I.) may well give you an answer. But there is no inherent way to trust it. Is it lying? Is it in error? Peterson himself noted that a good number of citations that ChatGPT gave him were simply made up. Connor Leahy, in the second video, gets more into this and it may be reason enough to slog your way through that video as best you can.
3) There are inklings and examples right now of how abusive, corrupt, and dangerous this A.I. can be (inherently, and via the uses some put it to). And the implication is that if these issues are popping up even now on version 4 of ChatGPT, what will version 6 or 7 bring? Again, Leahy gets into some of that but the specifics, unfortunately, fall a little short in explaining it to the layman.
Leahy's mission (at least in his interviews; I'm not really sure about the mission statement of his business) is to make A.I. a positive force. And he thinks we need to halt things immediately and institute some safeguards. One of the safeguards would be giving the end user the power and ability to know where the A.I. got its answers. I'd have to go back and re-watch that. But he uses the analogy of using A.I. to draw up architecture plans. They might look good. But how do we know there isn't some hidden flaw...a flaw that might even be intentional?
The gist is, we don't know what's going on inside that black box. Wrong information is a constant. And A.I. could very easily hand out misinformation, even if not specifically instructed to do so by its makers (which is another issue.)
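To make the "black box" point a little more concrete, here is a toy next-word generator. This is emphatically not how ChatGPT is built (the real thing uses billions of learned weights over whole contexts, not a simple word-count table), but it shows in miniature how fluent-looking text can be produced with nothing behind it that understands anything:

# A toy next-word generator: pick a likely next word given only the current
# word, using nothing but counts from some training text. This is NOT how
# ChatGPT works (real LLMs use billions of learned weights over whole
# contexts), but it shows how fluent-looking text can come from pure
# statistics with no understanding behind it.
import random
from collections import Counter, defaultdict

TRAINING_TEXT = (
    "the model predicts the next word and the next word follows the last "
    "word so the model appears to understand the text it predicts"
)


def build_table(text: str) -> dict:
    table = defaultdict(Counter)
    words = text.split()
    for current, following in zip(words, words[1:]):
        table[current][following] += 1
    return table


def generate(table: dict, start: str, length: int = 12) -> str:
    word, output = start, [start]
    for _ in range(length):
        if word not in table:
            break
        followers = list(table[word].keys())
        weights = list(table[word].values())
        # Choose among observed followers, weighted by how often each occurred.
        word = random.choices(followers, weights=weights)[0]
        output.append(word)
    return " ".join(output)


if __name__ == "__main__":
    table = build_table(TRAINING_TEXT)
    print(generate(table, "the"))

Scale that crude idea up by many orders of magnitude and the output starts to read like understanding, which is exactly why the trust problem in point 2 matters.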
Listen to the first few minutes of the Peterson interview where he shows his annoyance at the moralizing nature of ChatGPT. Wow. Just what we've run into.
This is a big subject. These videos aren't perfect. But this is the cutting edge and is a potentially yuge topic. I saw one video that said – gosh – A.I. could have the potential to harm mankind even more than "climate change." No, I'm not making that up. I laughed when I read that. But he (or she) makes a good point.
Brad Nelson
Administrator
עַבְדְּךָ֔ אֶת־ הַתְּשׁוּעָ֥ה הַגְּדֹלָ֖ה הַזֹּ֑את
Posts: 12,261
A.I.
Sept 12, 2023 7:43:24 GMT -8
Post by Brad Nelson on Sept 12, 2023 7:43:24 GMT -8
That A.I. video with Peterson started well but got pretty thick. I haven't finished it. Frankly, there was a lot of intellectual masturbation going on, something I try to avoid, if only to be polite. It seemed at times somewhat a contest of esoterica. I'll try to watch more and summarize any salient points. Here's what I've gotten so far from this interview and others:
1) Because humans are very language-based, LLM (large language model) A.I., such as ChatGPT, could allow us to learn more about ourselves.
2) There is bona fide computer code running ChatGPT. But apparently much or most of what is going on in the black-box depths of the A.I. does not involve code or specific Al-Gore-rhythms. I've heard some basics on how the training of the A.I. works. It (similar to Commander Data in Star Trek: The Next Generation) can scan vast volumes of raw text at amazing speed and somehow (in what sense, I don't know) incorporate it into some kind of knowledge base. (Note that those last few words speak to the heart of whatever magic is going on.) I've read nothing yet on how it seems to almost independently produce (to our ears) coherent answers from what might at first glance look like too much information for anyone or anything to collate usefully or intelligently. It "weighs" certain outcomes and is "trained" to produce correct answers (a toy illustration of that "weighing" follows at the end of this post). But it's a black box indeed; it's hard to get answers from anyone on how it can take so much data and make good use of it without actually being consciously intelligent in any sense we know. And that's the point. A.I. is not (so far as we know) intelligent in any independent sense. It is "intelligent" only in our sense, in that its inner workings are cajoled and steered (presently) to produce answers that make sense to us and produce desirable outcomes. They say that ChatGPT, for example, passed the bar exam. We're certainly not at the point where anything inside the computer understands what law is. But it has been given some kind of overall coherency of human thought and expectations so that it can produce answers we can use that pass our standards of "intelligence." If there is something other than that going on, I will report back. There are a lot of people watching to see if SkyNet will become self-aware. Many fear that. Many others are hoping for just that.
3) Billions are or can be made on this technology, which is one reason that coherent and clear answers about ChatGPT (and similar technologies) are a bit hard to come by. The other factor is that the nitty-gritty details are going to be way over our heads.
4) I find it interesting that "consensus" has not yet congealed to the point that you are considered a Luddite if you don't automatically sing the praises of A.I. I believe that will come very soon. For now, it is still "scientifically correct" to ask questions about it.
5) It's unarguably a useful technology right now. If it is this useful now in rough form, one can imagine how useful it will be in future versions. And, like any tool, it can and will be used for bad purposes. The more powerful it gets, the more it can potentially amplify malevolent motives.
6) Peterson noted that, as far as he is concerned, ChatGPT had passed the Turing Test. He said there was some recent psychological study where the patients were treated (presumably behind a blind) by a real person or by ChatGPT. Apparently the patients in the study preferred ChatGPT as their shrink.
7) This last point is my own. I find it interesting that we here are basically indistinguishable from ChatGPT-like characters. That does not mean I think we can be reduced to simple Al-Gore-Rhythms. I mean the exact opposite. You can have reasonably intelligent conversations with a chatbot that are nearly impossible to find "out there." Go on Facebook and see if you can engage honestly and deeply with any even halfway controversial topic without people wigging out. Of course, Peterson noted the moralizing nature of ChatGPT as well, so I guess nothing has changed. And I've also noted how it's nearly impossible to get anything but boilerplate baloney if you engage it on any social, political, or moral topic. But what also occurs to me is that friends sitting down at a keyboard and conversing in intelligent dialogue is very ChatGPT-like and not very Facebook-like. That is, we might find ourselves having more in common with an artificial intelligence agent than we do with the rank-and-file of humanity. But then, it's no secret why many people prefer pets.
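Since I promised it above, here is a toy illustration of what "weighing" and "training" amount to: a single made-up weight gets nudged, over and over, until the output matches the answers we want. Real systems do this across billions of weights, and this little cartoon is not anyone's actual training code:

# A cartoon of "training": one made-up weight is nudged, pass after pass,
# so the model's output gets closer to the answers we want. Real systems do
# this across billions of weights; nothing here understands anything, it
# just shrinks an error number.

def train(inputs: list, targets: list, weight: float = 0.0,
          learning_rate: float = 0.01, epochs: int = 200) -> float:
    for _ in range(epochs):
        for x, target in zip(inputs, targets):
            prediction = weight * x               # the model's "answer"
            error = prediction - target           # how wrong it was
            weight -= learning_rate * error * x   # nudge the weight a little
    return weight


if __name__ == "__main__":
    # The "right answers" here are simply double the input.
    xs = [1.0, 2.0, 3.0, 4.0]
    ys = [2.0, 4.0, 6.0, 8.0]
    print(f"learned weight: {train(xs, ys):.3f}")  # converges toward 2.0

Nothing in that loop understands anything; it just shrinks an error number. Multiply that by billions of weights and a mountain of text and you get something that can pass the bar exam without knowing what law is.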
Brad Nelson
Administrator
עַבְדְּךָ֔ אֶת־ הַתְּשׁוּעָ֥ה הַגְּדֹלָ֖ה הַזֹּ֑את
Posts: 12,261
Post by Brad Nelson on Sept 12, 2023 8:06:21 GMT -8
Fast-forward to about the 46-minute mark of the Peterson video. They're talking about creating scenarios (and I don't understand what "app" was made) where you could engage the A.I. in a conversation about the Bible. You might also incorporate Dante, Augustine, and all the great religious works and basically have a chat about it all. An intriguing idea. You wonder if anything useful could come from it. It's still murder just getting a Catholic to explain the Trinity.
You also wonder just how much of this awe of A.I. isn't the usual act of being bamboozled by intellectualism, for without a conscious intelligence behind this stuff, aren't we just reacting to basically what we want to hear? Food for thought as we look at what A.I. is and is not. I mean, what is it really going to tell us about the Bible that is interesting and unique and not simply a product of what others have already said? And if it can synthesize something new, then who or what is doing the synthesis?
Post by kungfuzu on Sept 12, 2023 9:17:25 GMT -8
The nature of the Trinity, particularly the nature of Christ and the "procession" of the Holy Spirit, has always been the biggest point of disagreement amongst Christians. Numerous heresies have arisen from these discussions.
Just another way to propagandize the proles. The "elites" will control A.I., and thus they will program in what it will say on any given subject. People, being the lazy slugs that they are, will go to places like ChatGPT in order to avoid the time, effort, and thinking required to study a subject. Two of the "elites'" goals are thus achieved: 1) Make the people lazier and less able to think for themselves. 2) Limit and control the message which the "elites" want to spread.
Brad Nelson
Administrator
עַבְדְּךָ֔ אֶת־ הַתְּשׁוּעָ֥ה הַגְּדֹלָ֖ה הַזֹּ֑את
Posts: 12,261
Post by Brad Nelson on Sept 12, 2023 9:40:26 GMT -8
A basic premise to consider regarding this general topic: Can there be knowledge without understanding? There can certainly be data. But given that this data is (so far as I know) sifted and weighted via human-purposed Al-Gore-rhythms, I'm not really sure what we have at the moment. Is it ultimately just a better search engine?
On second thought, don't watch that Peterson video. Both men are giddy, in a Utopian way, about the possibilities of ChatGPT and, specifically, LLMs (large language model A.I.). It will be able to fine-tune and customize our learning. Kids in the third world will be able to advance rapidly with only one hour of directed (by A.I.) education each day. Lofty dreams, indeed.
And I don't doubt that there is utility in that aspect. But, geez, they were saying the same things upon the creation of the personal computer and, later, the internet. "We will become educated, enlightened beings." But a quick Google says that 30% of internet content is porn. We have Shakespeare and all the great works at our fingertips already. But are we more or less vulgar and stupid than before computers? Who even remembers a good quote from Shakespeare? Do Facebook or any of the online "social" places resemble the lobby of the Library of Alexandria? We might have more data via A.I. presented to us in easier-to-access ways than ever before. But we are already flooded with easily-accessed data, and yet most people think CO2 is a poison.
Granted, there is that subset at the top – the people who create this stuff and would be our masters...the people you mentioned – who are not the rabble. There are legions of very smart people contributing to this A.I. stuff. But I can guarantee you that very few of these young men (and most of them will certainly be young men) are morally grounded in anything but money, power, prestige, tattoos, or just the desire to manipulate others – or perhaps to find "God" in the promised immortality of living as a digital being. Honor, grace, objectivity, reflection, humility, forbearance...I really don't expect the A.I. that they are producing to contain any of this. It may, rather, be "artificial materialist secularism" that is at the heart of this.
But Peterson and his interviewee think that it will be unleashed less for data/information than for creativity. Good luck with that.
Brad Nelson
Administrator
עַבְדְּךָ֔ אֶת־ הַתְּשׁוּעָ֥ה הַגְּדֹלָ֖ה הַזֹּ֑את
Posts: 12,261
A.I.
Sept 12, 2023 13:34:05 GMT -8
Post by Brad Nelson on Sept 12, 2023 13:34:05 GMT -8
You may not want to watch this: