It’s often not a choice between an AI-generated summary and a human-generated one, though. It’s a choice between an AI-generated summary and no summary.
Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.
Spent many years on Reddit and then some time on kbin.social.
Not in every way. They’re cheaper and faster.
If you simply don’t want to engage in a discussion with him, that’s fine; just let him know that you’re not interested in talking about it. You don’t have to justify your choices to him. If you want to use a particular browser, that’s fine, and if he spontaneously decides he needs to “talk you out of it” then that’s a dick move. Tell him that you don’t want to debate the subject; it’s no skin off his nose, so he shouldn’t try to engage you in one.
But if you’re asking “how can I convince him that he’s wrong”, well that is engaging in the debate. And if you’re going to engage in a debate you should try to be as open about it as you’d like your debate opponent to be in turn. Have you considered that perhaps he has some valid points and is not taking that position just to be contrarian?
Personally, I find that it’s pretty much impossible to talk someone with a strongly-held position out of that position. The value of Internet debates with people like that is that lots of spectators who don’t have such strongly-held positions may be watching, but when it’s a one-on-one situation it’s likely to be a futile and frustrating effort with no benefit. So I would advise going with the “don’t bother engaging” route. But of course, if you feel strongly that you want to engage, I can’t change your mind on that and won’t try. It’s your time to spend.
Some people are so addicted to anger that they’ll shoot themselves in the foot just so they’ll have something to complain about.
“The gimp” is a character from Pulp Fiction. You’re imagining things and refusing to use a powerful tool in response to that imagined slight.
That’s not what they’re arguing, not even close.
And unfortunately, this article is also just a response to media clickbait, not the discussion piece it tries to look like.
And becomes new clickbait in the process.
Looking forward to the “Waymo robotaxis become silent killers stalking the night” headlines once the fix is implemented.
I run tabletop roleplaying adventures and LLMs have proven to be great “brainstorming buddies” when planning them out. I bounce ideas back and forth, flesh them out collaboratively, and have the LLM speak “in character” to give me ideas for what the NPCs would do.
They’re not quite up to running the adventure themselves yet, but it’s an awesome support tool.
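For anyone curious what that “in character” setup looks like in practice, it mostly comes down to pinning the NPC’s persona in a system prompt. A minimal sketch, assuming any chat-style API that takes a system + user message list (the persona text and helper name here are made up for illustration):

```python
# Sketch of framing an NPC "in character" for an LLM chat API.
# The persona and function name are invented examples, not from any
# specific library; the message-list shape matches common chat APIs.

def npc_messages(persona: str, situation: str) -> list[dict]:
    """Build a chat message list that keeps the model in character."""
    return [
        {
            "role": "system",
            "content": (
                "You are roleplaying an NPC in a tabletop RPG. "
                "Stay in character and answer in first person.\n"
                f"Persona: {persona}"
            ),
        },
        {"role": "user", "content": situation},
    ]

messages = npc_messages(
    persona="Greta, a suspicious dwarven innkeeper who overhears everything",
    situation="The party asks about the strangers who rented the back room.",
)
# `messages` can then be passed as-is to whatever chat-completion
# endpoint you're using.
```

Swapping the persona string per NPC is usually all it takes to get usable in-voice brainstorming out of the model.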
It’s impossible to run an AI company “ethically” because “ethics” are such a wibbly-wobbly and subjective thing, and because there are people who simply wish to use it as a weapon on one side of a debate or the other. I’ve seen goalposts shift around quite a lot in arguments over “ethical” AI.
not some fucking investors and shareholders that probably kept pressuring CS for the last several years to reduce costs and increase revenue,
This is presumably part of what would be at issue in court. The shareholders are claiming they were lied to. We’ll see how that holds up.
CrowdStrike (CRWD.O) has been sued by shareholders who said the cybersecurity company defrauded them by concealing how its inadequate software testing could cause the July 19 global outage that crashed more than 8 million computers.
In a proposed class action filed on Tuesday night in the Austin, Texas federal court, shareholders said they learned that CrowdStrike’s assurances about its technology were materially false and misleading when a flawed software update disrupted airlines, banks, hospitals and emergency lines around the world.
Basically, the company advertised itself as being one way to the shareholders, they bought in on that basis, and then it turned out they were misrepresenting themselves. Presumably they’re suing the company and not the executives personally because that’s where the money is.
Note that simply owning the shares doesn’t mean that it’s already “their money.” If I buy a share in a company I can’t walk up to it and demand that they give me a portion of the cash from the register. It’s more complicated than that and lawsuits like this are part of that complexity.
That would depend entirely on why OpenAI might go under. The linked article is very sparse on details, but it says:
These expenses alone stack miles ahead of its rivals’ expenditure predictions for 2024.
Which suggests this is likely an OpenAI problem and not an AI in general problem. If OpenAI goes under the rest of the market may actually surge as they devour OpenAI’s abandoned market share.
I’m saying they can do it. If you don’t have a sample then you can’t do it and the question of “rights” is entirely moot.
If you do have a sample, then questions of rights and enforcement and whatnot can be addressed. “What jurisdiction are you in?” is an important first question for that. But if you don’t have a sample then we never get to that step.
Do you have any samples of his voice?
AI engineers are not a unitary group with opinions all aligned. Some of them really like money too. Or just want to build something that changes the world.
I don’t know of a specific “when” where a bunch of engineers left OpenAI all at once. I’ve just seen a lot of articles over the past year with some variation of “<company> is a startup founded by former OpenAI engineers.” There might have been a surge when Altman was briefly ousted, but that was brief enough that I wouldn’t expect a visible spike on the graph.
We are talking specifically about OpenAI, though.
Well, my point is that it’s already largely irrelevant what they do. Many of their talented engineers have moved on to other companies, some new startups and some already-established ones. The interesting new models and products are not being produced by OpenAI so much any more.
I wouldn’t be surprised if “safety alignment” is one of the reasons, too. There are a lot of folks in tech who really just want to build neat things and it feels oppressive to be in a company that’s likely to lock away the things they build if they turn out to be too neat.
OpenAI is no longer the cutting edge of AI these days, IMO. It’ll be fine if they close down. They blazed the trail, set the AI revolution in motion, but now lots of other companies have picked it up and are doing better at it than them.
Things change. There was a period before this information was easily available; this repository only goes back to 2013. Now there’s a period after this information, too. Things start and eventually they end.
Here’s hoping that some neat new things start up in its place.