
Discover an AI that truly understands you.

Generative artificial intelligence, or generative AI, is a type of artificial intelligence system capable of generating text, images, or other media in response to prompts. Generative models learn the patterns and structure of their training data, then generate new content that resembles that data but with some degree of novelty.
Anyone use this AI app?
OpenAI has reported on influence operations that use its AI tools. Such reporting, alongside data sharing, should become the industry norm.
Simply look out for libraries imagined by ML and make them real, with actual malicious code. No wait, don't do that
cross-posted from: https://links.hackliberty.org/post/1236693
Several big businesses have published source code that incorporates a software package previously hallucinated by generative AI.
Not only that, but someone, having spotted this recurring hallucination, turned that made-up dependency into a real one, which was subsequently downloaded and installed thousands of times by developers as a result of the AI's bad advice, we've learned. Had the package been laced with actual malware, rather than being a benign test, the results could have been disastrous.
According to Bar Lanyado, security researcher at Lasso Security, one of the businesses fooled by AI into incorporating the package is Alibaba, which at the time of writing still includes a pip command to download the Python package huggingface-cli in its GraphTranslator installation instructions.
There is a legitimate huggingface-cli, installed using pip install -U "huggingface_hub[cli]".
But the huggingface-c
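A simple defense against this kind of hallucinated-dependency attack is to check every requirement against a vetted allowlist before running pip install. Below is a minimal sketch in Python; the VETTED set is purely illustrative, not a real vetted list:

```python
# Flag requirements that are not on a vetted allowlist, so a
# hallucinated package name is caught before `pip install` runs.

VETTED = {"requests", "numpy", "huggingface_hub"}  # illustrative allowlist


def unvetted_packages(requirements_text: str) -> list[str]:
    """Return requirement names that are absent from the allowlist."""
    flagged = []
    for line in requirements_text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and whitespace
        if not line:
            continue
        # Keep only the package name: drop extras and version pins.
        name = line.split("[")[0]
        for sep in ("==", ">=", "<=", "~=", ">", "<"):
            name = name.split(sep)[0]
        if name.strip() not in VETTED:
            flagged.append(name.strip())
    return flagged


reqs = """
requests==2.31.0
huggingface_hub[cli]
huggingface-cli
"""
print(unvetted_packages(reqs))  # -> ['huggingface-cli']
```

The same idea scales up to pinned lockfiles with hashes; the point is that an install list is verified against something curated by humans, not taken straight from a model's suggestion.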
Employers are willing to pay up to 44% more for AI-skilled workers in IT and 41% more for those in research and development.
The decision follows an AI-generated voice robocall mimicking President Joe Biden and telling voters not to turn out in the New Hampshire primary.
The Federal Communications Commission (FCC) revealed Thursday that it had unanimously agreed to make AI-generated voice cloning in robocalls illegal, a warning shot to scammers not to use the cutting-edge technology for fraud or to spread election-related disinformation.
The agency said that beginning Thursday it will rely on the 1991 Telephone Consumer Protection Act (TCPA), which bars robocalls using pre-recorded and artificial voice messages, to target voice-cloning calls by establishing a so-called Declaratory Ruling within the TCPA.
The decision follows an AI-generated voice robocall mimicking President Joe Biden and telling voters not to turn out in the New Hampshire primary, alarming election security officials nationwide.
The New Hampshire Attorney General identified a Texas man and his company Life Corporation as the creator of the calls on Tuesday. Officials believe more than 20,000 New Hampshire residents received the calls.
"This will make it easier for state attorney
The AI Arms Race: The High Cost of Powering the Coming Revolution
A Beijing court will have to decide if an AI-generated voice, alleged to resemble a voiceover artist and used without her approval, has infringed on her right to voice.
The Beijing Internet Court on Tuesday began its hearing of a lawsuit filed by the artist, whose family name is Yin, claiming the AI-powered likeness of her voice had been used in audiobooks sold online. These were works she had not given permission to be produced, according to a report by state-owned media China Daily.
Yin said the entities behind the AI-generated content were profiting off the sale proceeds from the platforms on which the audiobooks were sold. She named five companies in her suit, including the provider of the AI software, saying their practices had infringed on her right to voice.
"I've never authorized anyone to make deals using my recorded voice, let alone process it with the help of AI, or sell the AI-generated ver
With more than 1,500 tokens exposed, research highlights importance of securing supply chains in AI and ML
The API tokens of tech giants Meta, Microsoft, Google, VMware, and more have been found exposed on Hugging Face, opening them up to potential supply chain attacks.
Researchers at Lasso Security found more than 1,500 exposed API tokens on the open source data science and machine learning platform, which allowed them to gain access to 723 organizations' accounts.
In the vast majority of cases (655), the exposed tokens had write permissions granting the ability to modify files in account repositories. A total of 77 organizations were exposed in this way, including Meta, EleutherAI, and BigScience Workshop - which run the Llama, Pythia, and Bloom projects respectively.
The three companies were contacted by The Register for comment; Meta and BigScience Workshop did not respond by the time of publication, although all of them closed the holes shortly after being notified.
Hugging Face is akin to GitHub for AI enthusiasts and hosts a plethora of major projects. More than 250,000
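Leaks like these are usually found with simple pattern scanning: Hugging Face user access tokens start with the prefix hf_, so a scanner can search repositories for that shape. A minimal sketch follows; the exact token length varies, so the pattern simply requires a long alphanumeric run after the prefix:

```python
import re

# Hugging Face user access tokens begin with "hf_"; match the prefix
# followed by a run of token characters of plausible length.
HF_TOKEN_RE = re.compile(r"\bhf_[A-Za-z0-9]{20,}\b")


def find_hf_tokens(text: str) -> list[str]:
    """Return candidate Hugging Face tokens found in the given text."""
    return HF_TOKEN_RE.findall(text)


sample = 'headers = {"Authorization": "Bearer hf_abcDEF1234567890abcDEF1234"}'
print(find_hf_tokens(sample))  # -> ['hf_abcDEF1234567890abcDEF1234']
```

Real secret scanners (the kind Hugging Face and GitHub now run server-side) layer entropy checks and live validation on top of such patterns to cut false positives, but the core detection is this simple.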
Adversarial algorithms can systematically probe large language models like OpenAI's GPT-4 for weaknesses that can make them misbehave.
When the board of OpenAI suddenly fired the company's CEO last month, it sparked speculation that board members were rattled by the breakneck pace of progress in artificial intelligence and the possible risks of seeking to commercialize the technology too quickly. Robust Intelligence, a startup founded in 2020 to develop ways to protect AI systems from attack, says that some existing risks need more attention.
Working with researchers from Yale University, Robust Intelligence has developed a systematic way to probe large language models (LLMs), including OpenAI's prized GPT-4 asset, using "adversarial" AI models to discover ["jailbreak" prompts](https://web.archive.org/web/2023120516
We have just released a paper that allows us to extract several megabytes of ChatGPT's training data for about two hundred dollars. (Language models, like ChatGPT, are trained on data taken from the public internet. Our attack shows that, by querying the model, we can actually extract some of the exact data it was trained on.) We estimate that it would be possible to extract ~a gigabyte of ChatGPT's training dataset from the model by spending more money querying the model.
Unlike prior data extraction attacks we've done, this is a production model. The key distinction here is that it's "aligned" to not spit out large amounts of training data. But, by developing an attack, we can do exactly this.
We have some thoughts on this. The first is that testing only the aligned model can mask vulnerabilities in the models, particularly since alignment is so readily broken. Second, this means that it is important to directly test base models. Third, we do als
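Attacks like this are verified by checking model outputs for long verbatim overlaps with known internet text. A toy version of that check, using fixed-length word n-grams; the window size here is illustrative and far smaller than what a real evaluation would use:

```python
def shares_ngram(output: str, corpus: str, n: int = 8) -> bool:
    """Check whether any n-word sequence of `output` appears verbatim in `corpus`."""
    words = output.split()
    grams = {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}
    return any(g in corpus for g in grams)


corpus = "the quick brown fox jumps over the lazy dog while the cat sleeps"
memorized = "model said: the quick brown fox jumps over the lazy dog today"
novel = "an entirely different sentence with no overlap at all here okay"
print(shares_ngram(memorized, corpus), shares_ngram(novel, corpus))  # -> True False
```

At scale the corpus side is indexed (e.g., with a suffix array) rather than scanned linearly, but the criterion is the same: a long enough verbatim match is strong evidence the output was memorized rather than generated.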
You may want to watch what you discuss with Google Bard or any other AI chatbot.
cross-posted from: https://links.hackliberty.org/post/115755
If I told you that your seemingly private conversations with Bard are being indexed and appearing in Google search results, would you still use the AI chatbot? That's exactly what's happened, and Google is now scrambling to fix the issue.
The U.K. has disbanded the Centre for Data Ethics and Innovation's (CDEI) advisory board as the government switches focus to a Frontier AI Taskforce prompted in part by the rise of ChatGPT.
The British government has quietly sacked an independent advisory board of eight experts that had once been poised to hold public sector bodies to account for how they used artificial intelligence technologies and algorithms to carry out official functions.
It comes as Prime Minister Rishi Sunak drives forward with a much-publicized commitment to make the United Kingdom a world leader in AI governance, and ahead of a global AI Safety Summit being arranged for November in Bletchley Park.
Researchers found a simple way to make ChatGPT, Bard, and other chatbots misbehave, proving that AI is hard to tame.
Hack Liberty Artificial Intelligence Code Archives
Gitea (Git with a cup of tea) is a painless self-hosted Git service written in Go
Gatekeeping so that AI is controlled by powerful corporations who will impose restrictions at the behest of the government.
matrix-chatgpt-bot: Talk to ChatGPT via any Matrix client!
A Matrix bot that uses waylaidwanderer/node-chatgpt-api to access the official ChatGPT API.
gpt4all: an ecosystem of open-source chatbots trained on massive collections of clean assistant data, including code, stories, and dialogue
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use. - nomic-ai/gpt4all
Demo, data, and code to train an open-source, assistant-style large language model based on GPT-J and LLaMA