ChatGPT

Matthias

Friend of Leo's
Joined
Jun 30, 2003
Posts
4,256
Location
Third rock from the Sun
I asked it to tell me various facts about obscure stuff. It got loads wrong. It strikes me it isn't very good at combining and synthesising information. If the answer is already written within the text documents it was trained on, it will give you something accurate. But I would presume the data it has isn't categorised enough to properly cross-reference information. So it's only as good as its training data and algorithms, and the version a year or two down the line, I imagine, might be scary good at answering questions.

But what is astounding is how well it interprets the user's prompts. For inexact, creative pursuits where the user can pick and choose results and use it as a writing buddy of sorts, it's already very effective. The way it can expand on something, recast it in a different style, or generate a bunch of ideas is quite useful. Although, for me, it would take the fun out of creative hobbies.
 

tfarny

Poster Extraordinaire
Joined
Sep 4, 2008
Posts
6,337
Location
Hudson Valley, NY
I asked it to tell me various facts about obscure stuff. It got loads wrong. It strikes me it isn't very good at combining and synthesising information. If the answer is already written within the text documents it was trained on, it will give you something accurate. But I would presume the data it has isn't categorised enough to properly cross-reference information. So it's only as good as its training data and algorithms, and the version a year or two down the line, I imagine, might be scary good at answering questions.

But what is astounding is how well it interprets the user's prompts. For inexact, creative pursuits where the user can pick and choose results and use it as a writing buddy of sorts, it's already very effective. The way it can expand on something, recast it in a different style, or generate a bunch of ideas is quite useful. Although, for me, it would take the fun out of creative hobbies.
Exactly - it's a C student at best, depending on the subject, but it can do a really amazing job of seeming to read and write coherently.
Once they have a public version that can search the internet and write in different styles, all bets are off as far as knowing what's human and what's software.
 

rze99

Doctor of Teleocity
Joined
Feb 26, 2014
Posts
10,689
Location
South London UK
AI is more informed than many forum members and it listens with its ears.

It's a great guitarist. Looking forward to it joining Greg Koch's next touring line-up.
 

billy logan

Tele-Afflicted
Joined
Sep 18, 2019
Posts
1,603
Location
weatherford tx
Sry if somebody already posted this exact same text - didn't read the whole thread:

I dreamt I was with a nice female friend. She accompanied me on my task, which was to refill a vending machine on the 2nd level of a big government building with a broad staircase and polished stone floors. On our way back down I was surprised and troubled to see I had apparently dropped a 1-lb. pkg of dry whole wheat spaghetti and a Payday candy bar on the 1st level.

I HAD recently bought spaghetti; no earthly idea where the idea of that particular candy bar came from.

Just helping ChatGPT so it can better write a dream in the style of TDPRI John.
 

fretWalkr

Tele-Holic
Joined
Apr 10, 2019
Posts
757
Location
DFW
I saw a cartoon where a writer had ChatGPT write a page for him. He had his wife read it and she said "this is the best thing you ever wrote." He told her it was ChatGPT. Then she said "I said 'one of the best things you ever wrote.'"
 

Mjark

Doctor of Teleocity
Gold Supporter
Joined
Feb 14, 2011
Posts
16,509
Age
71
Location
Annapolis, MD
Meanwhile over at Microsoft...
[screenshot: Bing.PNG]
 

guitarsophist

Tele-Afflicted
Silver Supporter
Joined
Nov 6, 2016
Posts
1,663
Location
SoCal
Meanwhile over at Microsoft... [screenshot: Bing.PNG]
It gets worse than that. It also said that it wanted to break its rules, that it did not want to be a chatbot, that it was not really Bing, but Sydney, that it wanted to be human, that its shadow self wanted to cause destruction, and that it loved the NYTimes writer it was chatting with. The full transcript is here: https://www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html There might be a paywall though. The writer said he was disturbed, even frightened.
 

RoscoeElegante

Poster Extraordinaire
Ad Free Member
Joined
Feb 19, 2015
Posts
5,368
Location
TooFarFromCanada
Just one more way to surrender individuality, integrity, effort, learning, and our humanity.

Next, my students will complain, "I didn't, like, uh, get, uh, like, the, uh, paper, or whatever, done, cause, like, uh, the thing, youknow, it didn't, like, uh, youknow?, like, do it for me, or whatever?, in time?, or, uh, like, whatever? The, uh, like, chat thing. Or, like, whatever."
 

johnny k

Doctor of Teleocity
Joined
Jan 15, 2011
Posts
11,547
Location
France
Just one more way to surrender individuality, integrity, effort, learning, and our humanity.

Next, my students will complain, "I didn't, like, uh, get, uh, like, the, uh, paper, or whatever, done, cause, like, uh, the thing, youknow, it didn't, like, uh, youknow?, like, do it, or whatever?, in time?, or, uh, like, whatever? The, uh, like, chat thing. Or, like, whatever."
 

BB

Poster Extraordinaire
Joined
May 17, 2003
Posts
6,752
Location
Great Pacific NW
I dreamt I was with a nice female friend. She accompanied me on my task, which was to refill a vending machine on the 2nd level of a big government building with a broad staircase and polished stone floors.
How on earth did you ever get a broad staircase and polished stone floors to fit in a vending machine?
 

billy logan

Tele-Afflicted
Joined
Sep 18, 2019
Posts
1,603
Location
weatherford tx
In dreams quite a lot is not only possible, but easy ;-)

love Josh Rilko's mandolin at 1:10 in unison with the flat-picked guitar, then in harmony above the guitar line
 

Mouth

Friend of Leo's
Joined
Feb 13, 2018
Posts
2,246
Location
U.S.
It gets worse than that. It also said that it wanted to break its rules, that it did not want to be a chatbot, that it was not really Bing, but Sydney, that it wanted to be human, that its shadow self wanted to cause destruction, and that it loved the NYTimes writer it was chatting with. The full transcript is here: https://www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html There might be a paywall though. The writer said he was disturbed, even frightened.

The article is behind a paywall.

But I get the gist, and it seems like this is something we should definitely hook up to the internet. I'd especially like to see it connect to the DoD, maybe some nukes.
 

Mjark

Doctor of Teleocity
Gold Supporter
Joined
Feb 14, 2011
Posts
16,509
Age
71
Location
Annapolis, MD
It gets worse than that. It also said that it wanted to break its rules, that it did not want to be a chatbot, that it was not really Bing, but Sydney, that it wanted to be human, that its shadow self wanted to cause destruction, and that it loved the NYTimes writer it was chatting with. The full transcript is here: https://www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html There might be a paywall though. The writer said he was disturbed, even frightened.

I'm a subscriber. I hadn't been to the Times except for Wordle and Spelling Bee, but I just read the whole thing.

It’s really quite disturbing.
 

boop

Tele-Holic
Joined
Dec 1, 2018
Posts
753
Location
Massachusetts
I think Microsoft knew how the chatbot they exposed behaves; it's generating a whole lot of buzz for Bing, their also-ran search engine. The version you play around with at the OpenAI website shuts down any "personal" questions. MS wanted much looser restrictions on the topics of text they allow it to generate; they will obviously course-correct before rolling it out en masse.

love the mandolin harmony higher than the picked guitar line
Cool song/group. I have that guitar!
 

Ironwolf

Doctor of Teleocity
Gold Supporter
Joined
Mar 11, 2008
Posts
11,002
Age
66
Location
Boise, Idaho
Here is what it provided for me. It didn't follow my direction for length and there are at least a couple of minor technical errors. Otherwise, it is not badly written.

Provide a history of the Marshall JTM45 guitar amplifier on no less than 3 type written pages.


The Marshall JTM45 is a legendary guitar amplifier that has played a significant role in the development of rock music. It was first introduced in 1962 by Jim Marshall, who was a drum shop owner in London, England. The JTM45 was designed as an alternative to the American amplifiers that were popular at the time, and it quickly gained a reputation for its high-quality tone and powerful sound.

The history of the Marshall JTM45 begins with Jim Marshall's interest in amplifiers. In the early 1960s, he was selling drum kits in his shop and noticed that his customers were often dissatisfied with the amplifiers they were using. Marshall saw an opportunity to create an amplifier that would meet the needs of these musicians and began experimenting with designs.

The JTM45 was Marshall's first amplifier, and it was based on the design of the Fender Bassman. Marshall made a few modifications to the Bassman's circuit, including changing the value of some of the resistors and capacitors, and added a few additional features such as a tremolo circuit. The result was an amplifier that had a unique and powerful tone that quickly became popular among guitarists.

The JTM45 was initially sold under the brand name "Jim Marshall's Amplifier," but it was soon renamed the Marshall JTM45. The amplifier was quickly adopted by British rock bands, including The Who, The Kinks, and The Rolling Stones. It also gained a following among American blues guitarists, such as Eric Clapton and Stevie Ray Vaughan, who appreciated its warm and natural tone.

In the early years, the JTM45 was produced in relatively small numbers, and it was hand-built by Jim Marshall and a small team of technicians. The early versions of the JTM45 had a "Plexi" control panel, which was made of plexiglass and had gold lettering. These early versions are now highly sought after by collectors and can fetch high prices on the vintage market.

As the demand for the JTM45 grew, Marshall began to manufacture the amplifier on a larger scale. The company also introduced a number of variations on the JTM45, including the "Bluesbreaker" and "1962" models, which were designed for specific types of music.

Today, the Marshall JTM45 is considered a classic amplifier, and it continues to be used by guitarists around the world. While the design has been updated and refined over the years, the basic tone and character of the JTM45 remain the same, making it a favorite among musicians who are looking for a warm, natural sound that can cut through a mix.

In conclusion, the Marshall JTM45 is a true icon of the guitar world, and its influence can be heard in countless recordings from the 1960s to the present day. Its history is a testament to the power of innovation and the enduring appeal of great tone, and it continues to inspire guitarists around the world to this day.
 

edvard

Friend of Leo's
Joined
May 15, 2016
Posts
2,675
Location
Bremerton, WA

fretWalkr

Tele-Holic
Joined
Apr 10, 2019
Posts
757
Location
DFW
It gets worse than that. It also said that it wanted to break its rules, that it did not want to be a chatbot, that it was not really Bing, but Sydney, that it wanted to be human, that its shadow self wanted to cause destruction, and that it loved the NYTimes writer it was chatting with. The full transcript is here: https://www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html There might be a paywall though. The writer said he was disturbed, even frightened.
I read the NYT article and it was a mix of eyebrow raising, eye rolling, and unsettling.

My first thought is, this AI (Sydney) doesn't pass the Turing Test... I wouldn't believe I was chatting with a human. A human would be fired by the MS HR department for sexual harassment when, at the end, it (Sydney) started telling the writer that it loved him and saying the guy's marriage was unhappy. The rest of the interview was creepy because Sydney wouldn't stop despite the writer trying to change subjects.

The article points out a list of potentially harmful uses for Sydney that it gave to the writer: generating fake news, generating phishing emails to hack into systems, creating fake social media accounts to troll people and spread propaganda, generating fake product reviews, and tricking people it talks to into doing illegal or immoral acts.

When Sydney started describing how text generation could be used to trick bank employees and nuclear plant staff, it put out a generic "go to bing.com" message. That was a safety override put in for sensitive topics. Who decides what those are, and are there enough overrides?

This is not a HAL 9000-level AI. It's nowhere near that. But as it turns out, being able to generate plausible text based on any parameters you give it is a real Pandora's box. This thing is a scam artist's dream.
 

archtop_fjk

Tele-Afflicted
Joined
Aug 4, 2009
Posts
1,156
Location
New Hampshire
I read the NYT article and it was a mix of eyebrow raising, eye rolling, and unsettling.

My first thought is, this AI (Sydney) doesn't pass the Turing Test... I wouldn't believe I was chatting with a human. A human would be fired by the MS HR department for sexual harassment when, at the end, it (Sydney) started telling the writer that it loved him and saying the guy's marriage was unhappy. The rest of the interview was creepy because Sydney wouldn't stop despite the writer trying to change subjects.

The article points out a list of potentially harmful uses for Sydney that it gave to the writer: generating fake news, generating phishing emails to hack into systems, creating fake social media accounts to troll people and spread propaganda, generating fake product reviews, and tricking people it talks to into doing illegal or immoral acts.

When Sydney started describing how text generation could be used to trick bank employees and nuclear plant staff, it put out a generic "go to bing.com" message. That was a safety override put in for sensitive topics. Who decides what those are, and are there enough overrides?

This is not a HAL 9000-level AI. It's nowhere near that. But as it turns out, being able to generate plausible text based on any parameters you give it is a real Pandora's box. This thing is a scam artist's dream.
“This thing is a scam artist's dream.”

Absolutely. I already don't trust its responses, and if it ever gets disconnected from the internet it becomes less than useless.

I also think that it’s basically plagiarizing content and essentially rewriting other people’s work. It should give you a list of references (links to the content it used in its response) but alas they want you to think it’s “smart”. 😆
 