"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype.
Proud supporter of working people. And proud booer of SXSW 2024.
Meta has not announced the new bot, dubbed Meta External Agent, beyond updating an existing web page for developers.
Meta has quietly unleashed a new web crawler to scour the internet and collect data en masse to feed its AI model.
The crawler, named the Meta External Agent, was launched last month, according to three firms that track web scrapers and bots across the web. The automated bot essentially copies, or “scrapes,” all the data that is publicly displayed on websites, for example the text in news articles or the conversations in online discussion groups.
A representative of Dark Visitors, which offers a tool for website owners to automatically block all known scraper bots, said Meta External Agent is analogous to OpenAI’s GPTBot, which scrapes the web for AI training data. Two other entities involved in tracking web scrapers confirmed the bot’s existence and its use for gathering AI training data.
While close to 25% of the world’s most popular websites now block GPTBot, only 2% are blocking Meta’s new bot, data from Dark Visitors shows.
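For site owners who want to opt out, the mechanism both bots are supposed to honor is a robots.txt disallow rule, which is what tools like Dark Visitors automate. Below is a minimal sketch; the user-agent tokens ("GPTBot" for OpenAI and "meta-externalagent" for Meta's new crawler) are taken from the companies' developer documentation as reported, so verify them against the current docs before relying on this.

```
# robots.txt — sketch of opting out of AI-training crawlers
# User-agent tokens assumed from vendor developer pages; verify before use.

User-agent: GPTBot
Disallow: /

User-agent: meta-externalagent
Disallow: /
```

Keep in mind that robots.txt is purely advisory: a crawler stays out only if it chooses to respect the file.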
Earlier this year, Mark Zuckerberg, Meta’s cofounder
A study published in the Journal of Hospitality Marketing & Management finds that consumers are very much turned off by products that say they are “AI-powered.”
With the next generation of AI photo editing tools built into Google’s flagship Pixel 9 family, our basic assumptions about photographs capturing a reality we can believe in are about to be seriously tested — and @theverge shows us why.
“An explosion from the side of an old brick building. A crashed bicycle in a city intersection. A cockroach in a box of takeout. It took less than 10 seconds to create each of these images with the Reimagine tool in the Pixel 9’s Magic Editor. They are crisp. They are in full color. They are high-fidelity. There is no suspicious background blur, no tell-tale sixth finger. These photographs are extraordinarily convincing, and they are all extremely f---ing fake.” Take a look at the pictures for yourself as The Verge ponders the implications of these new capabilities.
Lionsgate has dropped the marketing consultant who included AI-generated quotes from critics in the latest "Megalopolis" trailer.
Lionsgate has parted ways with Eddie Egan, the marketing consultant who came up with the “Megalopolis” trailer that included fake quotes from famous film critics.
The studio pulled the trailer on Wednesday, after it was pointed out that the quotes trashing Francis Ford Coppola’s previous work did not actually appear in the critics’ reviews, and were in fact made up.
Matt Garman sees a shift in software development as AI automates coding, telling staff to enhance product-management skills to stay competitive.
Software engineers may have to develop other skills soon as artificial intelligence takes over many coding tasks.
That's according to Amazon Web Services CEO Matt Garman, who shared his thoughts on the topic during an internal fireside chat in June, per a recording of the meeting obtained by Business Insider.
"If you go forward 24 months from now, or some amount of time — I can't exactly predict where it is — it's possible that most developers are not coding," said Garman, who became AWS's CEO in June.
"Coding is just kind of like the language that we talk to computers. It's not necessarily the skill in and of itself," the executive said. "The skill in and of itself is like, how do I innovate? How do I go build something that's interesting for my end users to use?"
This means the job of a software developer will change, Garman said.
"It just means that each of us has to get more in tune with what our customers need and what the actual end thing is that we're going
There are few miniature painting contests as prestigious as the Golden Demon, Games Workshop’s showcase for the artistry and talent in the Warhammer hobby. After the March 2024 Golden Demon was marred by controversy around AI content in a gold-medal-winning entry, GW has revised its guidelines, and any kind of AI assistance is now out.
The Warhammer 40k single miniature category at the Adepticon 2024 Golden Demon was won by Neil Hollis, who submitted a custom, dinosaur-riding Aeldari Exodite (a fringe Warhammer 40k faction that has long been part of the lore but never received models). The model’s base included a backdrop image which, it emerged, had been generated using AI software.
Online discussions soon turned sour as fans quarrelled over the eligibility of the model, the relevance of a backdrop in a competition about painting miniatures, the ethics of AI-generated media, and Hollis’ responses to criticism.
I'm currently trying to leave Gmail and take all my emails with me, if possible. However, many of the comments are about why I shouldn't host my own server. That got me thinking that there should be a new kind of email system, one not based on all the previous crud from the before times that we still use today.
And indeed, it looks like AI will be the driving force that ends email, just like spam did the telephone. Sure, the telephone is still around, but no one uses phone conferencing anymore, for example; we use Teams and Zoom and other such shitty pay services. So the time is ripe to reinvent email. Users may not see a big difference, but the tech behind it could hopefully be simplified and decentralized, as it was meant to be.
Many Procreate users can breathe a sigh of relief now that the popular iPad illustration app has taken a definitive stance against generative AI. "We're not going to be introducing any generative AI into our products," Procreate CEO James Cuda said in a video posted to X. "I don't like what's happening to the industry, and I don't like what it's doing to artists."
The creative community's ire toward generative AI is driven by two main concerns: that AI models have been trained on their content without consent or compensation, and that widespread adoption of the technology will greatly reduce employment opportunities. Those concerns have driven some digital illustrators to seek out alternative solutions to apps that integrate generative AI tools, such as Adobe Photoshop. "Generative AI is ripping the humanity out of things. Built on a foundation of theft, the technology is steering us toward a barren future," Procreate said on the new AI section of its website. "We think machine le
‘There’s a new intelligence in town’ as Victor Miller and his ChatGPT bot, Vic, plan to lead Cheyenne in a hybrid format
Voters in Wyoming’s capital city on Tuesday face a decision: whether to elect a mayoral candidate who has proposed letting an artificial intelligence bot run the local government.
Earlier this year, the candidate in question – Victor Miller – filed for him and his customized ChatGPT bot, named Vic (Virtual Integrated Citizen), to run for mayor of Cheyenne, Wyoming. He has vowed to helm the city’s business with the AI bot if he wins.
Miller has said that the bot is capable of processing vast amounts of data and making unbiased decisions.
I received a comment from someone telling me that one of my posts had bad definitions, and he was right. Despite the massive problems caused by AI, it's important to specify what an AI does, how it is used, for what reason, and what type of people use it. I suppose judges might already be doing this, but regardless, an AI used by one dude for personal entertainment is different from a program used by a megacorporation to replace human workers, and must be judged differently. Here, then, are some specifications. If these are still too vague, please help with them.
a. What does the AI do?
It takes in a dataset of images, specified by a prompt, and compiles them into a single image thru programming (like StaDiff, Dall-E, &c);
It takes in a dataset of text, specified by a prompt, and compiles that into a single string of text (like ChatGPT, Gemini, &c);
It takes in a dataset of sound samples, specified by a prompt, and compiles that into a single sound (like AIVA, MuseNet, &c).
Artists prepare to take on AI image generators as copyright suit proceeds.
Artists defending a class-action lawsuit are claiming a major win this week in their fight to stop the most sophisticated AI image generators from copying billions of artworks to train AI models and replicate their styles without compensating artists. In an order on Monday, US district judge William Orrick denied key parts of motions to dismiss from Stability AI, Midjourney, Runway AI, and DeviantArt. The court will now allow artists to proceed with discovery on claims that AI image generators relying on Stable Diffusion violate both the Copyright Act and the Lanham Act, which protects artists from commercial misuse of their names and unique styles.
"We won BIG," an artist plaintiff, Karla Ortiz, wrote on X (formerly Twitter), celebrating the order. "Not only do we proceed on our copyright claims," but "this order also means companies who utilize" Stable Diffusion models and LAION-like datasets that scrape artists' works for AI training without permission "could now be liable for
Earlier this year I got fired and replaced by a robot. And the managers who made the decision didn't tell me – or anyone else affected by the change – that it was happening.
The gig I lost started as a happy and profitable relationship with Cosmos Magazine – Australia's rough analog of New Scientist. I wrote occasional features and a column that appeared every three weeks in the online edition.
It didn't last. In February – just days after I'd submitted a column – I and all other freelancers for Cosmos received an email informing us that no more submissions would be accepted.
It's a rare business that can profitably serve both science and the public, and Cosmos was no exception: I understand it was kept afloat with financial assistance. When that funding ended, Cosmos ran into trouble.
Accepting the economic realities of our time, I mourned the loss of a great outlet for my more scientific investigations, and moved on.
Eric Schmidt, ex-CEO and executive chairman at Google, said his former company is losing the AI race and remote work is to blame. From a report:
"Google decided that work-life balance and going home early and working from home was more important than winning," Schmidt said at a talk at Stanford University. "The reason startups work is because the people work like hell." Schmidt made the comments earlier at a wide-ranging discussion at Stanford. His remarks about Google's remote-work policies were in response to a question about Google competing with OpenAI.
The closest Big Tech has come to a breakup was the Microsoft case a quarter-century ago.
Just yesterday, Google held a splashy event to show off its latest lineup of hardware products, including Google Pixel smartphones. As the event made clear, these devices, as well as the broader ecosystem of third-party Android hardware products, are the most important vehicle for Google’s AI ambitions—without Android, Google has no obvious way to ensure that billions of people get to interact with its Gemini-powered chatbots and other AI services on a daily basis. (Indeed, one can imagine Google’s leverage of Android to promote Gemini being the kind of issue that could inspire a future antitrust suit in the U.S. or elsewhere.)
As we know, Google is pushing AI features into Android, and of course Google's AI learns everything from its users. As users become more dependent on Google, it could end up controlling the lives of billions of people in the future.
And there's no privacy, since our data is Google's gold mine and they will dig it up as much as they can.