
anoonzee:

shutup-rachel:

Absolutely losing it at this Reddit post

[images: three screenshots of the Reddit post]

And the update

[images: three screenshots of the update]

She buttered Jorts

The outrage, summed up in a perfect Tweet:

[image: the Tweet]

citizen-zero:

scarlet00rose:

luminarai:

listen, I’m not the biggest fan of kids but if a child looks at me then you bet I’m gonna smile back at them. kids deserve to experience the world as a kind and safe place to explore okay.

But the world is not kind or safe.

then Fucking do your part to make it that way.

fullmetalfisting:

I love when you see someone reblogging a text post multiple times because you don’t know if tumblr glitched on their end or if the post, “who else up garging they goyle,” really fucking resonated with them and they just had to rb that mf 4x

kedreeva:

blanket-fish:

funnyinstagramvideos:

ig-braving:

ig-braving:

imagine trying to learn english and hearing this

the worst part is that this makes perfect sense to me

There is no one on earth stronger than people who learn English as a second language. I bow to you

In case this is confusing, here’s some more explanation.

Yeah = Yeah

No = No

No, yeah = Yeah (the no dismisses uncertainty about the yeah)

Yeah, no = No (the yeah confirms the no)

Yeah- no, for sure = Yeah (yeah is the answer, with “no, for sure” as emphasis under the same rule as “no, yeah”, where the no dismisses uncertainty about the for sure. Generally this means the speaker actually agrees with the asker)

Yeah, yeah, yeah = No (because the speaker is either not listening, doesn’t believe the asker, or is dismissing the asker, often because they have something else to say or suggest; this one is usually used with familiarity, not strangers, and often comes with some sort of annoyance)

Yeah- no, yeah = Yeah (same terms as “yeah- no, for sure” but less enthusiastic or less in agreement with the asker, i.e. willing to do it but maybe not wanting to, where “it” may be anything from agreeing with the asker to doing a task)

No- yeah, no = No (no is the answer, with “yeah, no” as emphasis under the same rule as “yeah, no”)
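
(For fun: here is the whole mapping above restated as a little Python lookup table. The dict, function, and fallback answer are made up for illustration; the rules themselves are exactly the ones in the post.)

```python
# The yeah/no rules from the post, as a lookup table.
# Keys: what a native English speaker says. Values: what they mean.
YEAH_NO = {
    "yeah": "yes",
    "no": "no",
    "no, yeah": "yes",            # the no dismisses uncertainty about the yeah
    "yeah, no": "no",             # the yeah confirms the no
    "yeah- no, for sure": "yes",  # enthusiastic agreement
    "yeah, yeah, yeah": "no",     # dismissive; the speaker isn't really on board
    "yeah- no, yeah": "yes",      # agreement, but reluctant
    "no- yeah, no": "no",         # emphatic no
}

def decode(utterance: str) -> str:
    """Translate a yeah/no pile-up into a plain answer."""
    return YEAH_NO.get(utterance.strip().lower(), "ask again")

print(decode("Yeah, no"))  # -> no
```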

ophilosoraptoro:

brosef:

5bi5:

ramblingcj:

weaver-z:

weaver-z:

Prison guards: Iroh? Escape? Ha! That weak, senile old man couldn’t escape if we rolled a red carpet to the door!

Iroh alone in his cell:

[video]

I saw the video and thought “that guy looks like Jack Black”; then I scrolled down and read that. Yup, it sure was Jack Black. Also yes, the above is actually true: his mother, Judith Love Cohen, did indeed help create the abort-guidance system that rescued the Apollo 13 astronauts.


[image]

Wait does this mean people are unfamiliar with this iconic post

[image: the iconic post]

Oh hey, I’m in a screenshot.

That’s appropriate, since Jack Black himself is like a chair to the back.

hatingongodot:

hatingongodot:

Years on the internet and somehow I still click on comment sections with the insanely optimistic idea that I’ll learn something new, instead of being subjected to the dumbest motherfuckers online typing like their sole purpose in life is to make me want to end mine

“Wow, what an interesting post! I want to see what sort of fascinating discourse is being generated by the idea posited by the original poster” <- Me, operating under levels of delusion yet unexplained by modern science

becausegoodheroesdeservekidneys:

shortace:

ocean-again:

shortace:

Just on a whim, because I know that Alcibiades is one of the weirdest and funniest characters in ancient Greek history, I asked ChatGPT “What’s the weirdest thing Alcibiades ever did?”

ChatGPT came back with the details of something Alcibiades (henceforth referred to as ‘Alci’ so I don’t have to keep typing it out) was accused of, but acquitted of.

When I pointed out that he had been acquitted and may not have actually done this thing, ChatGPT apologised and said, “yes, he was acquitted”, and then went on to tell me that, nonetheless, the event was significant because it made Alci flee the city.

Alci did not flee the city; he was sent away on a military expedition, which was exactly what he’d wanted and asked for. When I pointed that out, ChatGPT apologised again for being wrong.

I asked again for weird things he might actually have done, and was told one version of a story I’ve heard before about how Alci stole some stuff from a friend. ChatGPT’s version was different from what I’d heard, though, so I mentioned that, and only then did ChatGPT acknowledge that there were different versions of the story. As part of its apology and correction, ChatGPT said that it did not always have access to all information - but then proceeded to provide details of the version of the story I’d heard before, showing that it did, in fact, have access to that information.

I asked again, what is the weirdest thing Alcibiades ever did? ChatGPT gave me an answer, which was a story I’d never heard before, so I asked for a source. ChatGPT told me it was in Plutarch’s Lives, and I presumed it was in his Life of Alcibiades, so that’s where I looked. When I said I couldn’t find it there, ChatGPT told me, sorry for not being specific, it was actually in Plutarch’s Life of Nicias. So I went and read Plutarch’s Life of Nicias and couldn’t find it.

So I told ChatGPT that I couldn’t find the story in that book, could it please be more specific? What I was hoping for was a chapter or page number or something, I just presumed I’d missed it.

ChatGPT came back with “no, actually it’s not in that book, it may be a later invention, there is no concrete evidence for this story.”

TL;DR: ChatGPT cannot be trusted. Even when it does give you a source, it can be wrong. It has no capacity to evaluate the accuracy or likely accuracy of the information it gives you. It will present you with wrong or debatable information and give you absolutely no indication that it may not be correct, or that other versions or interpretations are possible.

gotta remember that ChatGPT works basically the same way autocomplete works, but it can autocomplete longer runs of reasonably coherent text.

it’s not looking up facts; it’s both trying to say the thing that’s most likely to come next in the text it was trained on, and also trying not to perfectly replicate the training text, because it’s supposed to be a bit creative.

what this means is that it’s actually primed to lie to you. you can feed it nothing but perfectly factual text and it will spit back lies because the truth replicates the training set too closely.

it’s not really capable of answering a question the way a person might.

what it does is generate text that reasonably seems like what an answer to that question might look like.

it’s a bullshit generator.

it is made to bullshit tech investors (who exclusively talk by making up things that sound correct without regard for the actual truth). so, if you’re smarter than a venture capitalist, don’t fall for the bullshit meant to ensnare venture capitalists.
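
(If the “fancy autocomplete” framing feels abstract, here’s a toy sketch in Python of roughly what’s going on. Every name, word, and probability in it is made up for illustration, and real models work over billions of learned statistics rather than a tiny dict, but the shape of the loop is the point: pick likely-sounding next words, with a dash of randomness, and at no step check whether the result is true.)

```python
import random

# Toy "autocomplete" model: for each word, a made-up distribution over
# possible next words. A real model learns these statistics from its
# training text; crucially, it stores likelihoods, not facts.
NEXT_WORD = {
    "the":  [("moon", 0.4), ("answer", 0.35), ("source", 0.25)],
    "moon": [("is", 0.6), ("landing", 0.4)],
    "is":   [("made", 0.5), ("in", 0.3), ("not", 0.2)],
}

def sample_next(word, temperature=1.2):
    """Pick a plausible next word. Higher temperature = more 'creative',
    i.e. more willing to stray from the single most likely continuation."""
    words, probs = zip(*NEXT_WORD[word])
    weights = [p ** (1.0 / temperature) for p in probs]
    return random.choices(words, weights=weights)[0]

# Generation = repeatedly asking "what tends to come next?".
# Note that nothing here ever checks whether the output is true.
word, text = "the", ["the"]
while word in NEXT_WORD:
    word = sample_next(word)
    text.append(word)
print(" ".join(text))
```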

That’s a really good way to put it!

Reblogging this again because I am about to have to explain this to my boss and the head of academic quality today

wizard-email:

i hate you bluetooth i hate you single port phones i hate you battery low please charge i hate you automated computer voice i hate you minimalist design