Date: 2024-01-10 | Views: 263
Fake kidnappings staged with AI
A bizarre cyber-kidnapping case this week ended with a missing 17-year-old Chinese exchange student being found alive in a tent in the frozen Utah wilderness. Scammers had manipulated him into isolating himself there by claiming they had kidnapped him, and extorted an $80,000 ransom from his parents.
Riverdale police are dealing with the cyber-kidnapping phenomenon.
While it is unclear whether AI was used in this particular case, it highlights a growing trend of fake kidnapping schemes, often targeting Chinese exchange students. According to the Riverdale Police Department, scammers typically convince victims to isolate themselves by threatening harm to their families, then use fear tactics and faked photos and audio (sometimes staged, sometimes AI-generated to impersonate the "kidnapped" victim) to extort money. Last year, Arizona woman Jennifer DeStefano testified before the U.S. Senate that she had been fooled by deepfake AI technology into believing her 15-year-old daughter Briana had been kidnapped. The scammers apparently learned the teenager was away on a ski trip, then called Jennifer using a deepfake AI voice imitating Briana sobbing and crying: "Mom, these bad men have me. Help me, help me."
A man then threatened to pump Briana full of drugs and kill her if the ransom was not paid. Fortunately, before she handed over any cash, another parent mentioned having heard of similar AI scams, and Jennifer was able to reach the real Briana and confirm she was safe. Police showed little interest in her report, dismissing it as a prank call.
Sander van der Linden, a psychology professor at the University of Cambridge, advises people to avoid posting travel plans online and to say as little as possible to spam callers, to stop them from capturing your voice. If you have a lot of audio or video footage of yourself online, you may want to consider removing it.
Robotics' "ChatGPT moment"?
Brett Adcock, founder of Figure Robot, breathlessly tweeted in all lowercase over the weekend that "our lab just had an AI breakthrough" and that robotics is approaching its "ChatGPT moment." That may be overstating things a bit. The breakthrough was showcased in a one-minute video of the company's Figure-01 robot making coffee by itself after 10 hours of human instruction.
Making coffee is not exactly groundbreaking (and certainly not everyone was impressed), but the video claims the robot is able to learn from its mistakes and self-correct. So when Figure-01 misplaces the coffee pod, it is smart enough to nudge it into the slot. Until now, AI has been fairly bad at self-correcting its errors.
Adcock said: "The reason why this is so groundbreaking is: if you can get human data for an application (making coffee, folding laundry, warehouse work, etc.), you can then train an AI system end-to-end on Figure-01. There is a path to scale to every use case, and when the fleet expands, further data is collected from the robot fleet, re-trained, and the robot achieves even better performance."
Another robot hospitality video came out this week, showcasing Google DeepMind and Stanford University's Mobile ALOHA, a robot that can cook you dinner and then clean up afterward. Researchers claim it took only 50 demonstrations for the robot to learn some new tasks, showing footage of it cooking shrimp and chicken dishes, riding an elevator, opening and closing a cabinet, wiping up spilled wine and pushing in chairs. Both the hardware and the machine-learning algorithms are open source, and the system costs $20,000 from Trossen Robotics.
Introducing — Hardware! A low-cost, open-source, mobile manipulator. One of the most high-effort projects in my past 5 yrs! Not possible without co-lead @zipengfu and @chelseabfinn. At the end, what's better than cooking yourself a meal with the… pic.twitter.com/iNBIY1tkcB
— Tony Z. Zhao (@tonyzzhao) January 3, 2024
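Mobile ALOHA's "learn from 50 demonstrations" approach is imitation learning, also known as behavior cloning: train a policy, via ordinary supervised learning, to map observations to the actions a human expert took. The real system trains a neural policy on camera images and joint states; purely as a toy illustration (the data, dimensions and linear policy below are invented, not from the ALOHA codebase), the idea can be sketched with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for 50 human demonstrations: each pairs a 4-D
# observation with the 2-D action a simulated "expert" took. The expert
# here is just a fixed linear map plus a little noise.
true_W = np.array([[1.0, 0.5, -0.3, 0.2],
                   [0.0, 1.0, 0.4, -0.1]])          # expert policy (2x4)
obs = rng.normal(size=(50, 4))                       # 50 observations
actions = obs @ true_W.T + 0.01 * rng.normal(size=(50, 2))

# Behavior cloning = supervised regression from observations to actions.
# With a linear policy this is just least squares.
W_hat, *_ = np.linalg.lstsq(obs, actions, rcond=None)  # learned (4x2)

# The cloned policy now imitates the expert on unseen observations.
new_obs = rng.normal(size=4)
predicted_action = new_obs @ W_hat
expert_action = true_W @ new_obs
```

The same recipe scales up by swapping the linear map for a large network and the synthetic vectors for real teleoperation logs, which is why collecting more fleet data (as Adcock describes above) directly improves performance.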
Full-scale AI plagiarism war
One of the weirder discoveries of the past few months is that Ivy League colleges in the U.S. care more about plagiarism than they do about genocide. This, in a roundabout way, is why billionaire Bill Ackman is now proposing using AI to conduct a plagiarism witch hunt across every university in the world.
Ackman was unsuccessful in his campaign to get Harvard President Claudine Gay fired over failing to condemn hypothetical calls for genocide, but the subsequent campaign to get her fired over plagiarism worked a treat. However, it blew back on his wife Neri Oxman, a former Massachusetts Institute of Technology professor, when Business Insider published claims her 300-page 2010 dissertation had some plagiarised paragraphs.
Ackman now wants to take everyone else down with Neri, starting with a plagiarism review of every academic, admin staffer and board member at MIT. "Every faculty member knows that once their work is targeted by AI, they will be outed. No body of written work in academia can survive the power of AI searching for missing quotation marks, failures to paraphrase appropriately, and/or the failure to properly credit the work of others."
Last night, no one at @MIT had a good night's sleep. Yesterday evening, shortly after I posted that we were launching a plagiarism review of all current MIT faculty, President Kornbluth, members of MIT's administration, and its board, I am sure that an audible collective gasp…
— Bill Ackman (@BillAckman) January 7, 2024
Ackman then threatened to do the same at Harvard, Yale, Princeton, Stanford, Penn and Dartmouth and then surmised that sooner or later, every higher education institution in the world will need to conduct a preemptive AI review of its faculty to get ahead of any possible scandals.
Showing why Ackman is a billionaire and you're not: halfway through his 5,000-word screed, he realized there's money to be made by starting a company offering credible, third-party AI plagiarism reviews, and added he'd be interested in investing in one.
Enter convicted financial criminal Martin Shkreli. Better known as Pharma Bro for buying the license to Daraprim and then hiking the price by 5,455%, Shkreli now runs a medical LLM service called Dr Gupta. He replied to Ackman, saying: “Yeah I could do this easily,” noting his AI has already been trained on the 36 million papers contained in the PubMed database.
Probably better to do vector search tbh but some rag to go through it case by case. We actually already have all of pubmed downloaded so it's more or less a for loop once you feel good about your main tool. Can calibrate it on Gay's work
— Martin Shkreli (e/acc) (@wagieeacc) January 7, 2024
While online plagiarism detectors such as Turnitin already exist, there are doubts about their accuracy, and it would still be a mammoth undertaking to enter every article from every academic at even a single institution and cross-check the citations. AI agents, however, could potentially conduct such a review systematically and affordably. Even if the global plagiarism witch hunt doesn't happen, it seems increasingly likely that within the next couple of years, any academic who has ever plagiarized will get found out in the course of a job interview process, or whenever they tweet a political position someone else doesn't like.
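Neither Ackman nor Shkreli has published any code, but the core mechanic of text-reuse detection is shingling: break a document into overlapping word n-grams and measure how many also appear in a source. A toy Python sketch (real detectors like Turnitin use far more robust fingerprinting, paraphrase matching and citation analysis):

```python
def ngrams(text, n=5):
    """Set of word n-grams; 5-grams are a common shingle size for reuse detection."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(doc, source, n=5):
    """Fraction of doc's n-grams that also occur in source (containment score)."""
    d, s = ngrams(doc, n), ngrams(source, n)
    if not d:
        return 0.0
    return len(d & s) / len(d)

source = "the quick brown fox jumps over the lazy dog near the river bank"
copied = "The quick brown fox jumps over the lazy dog"
print(overlap_score(copied, source))  # → 1.0: every 5-gram is lifted verbatim
```

Running this pairwise over every paper by every faculty member is exactly the "more or less a for loop" Shkreli describes; the hard parts are corpus coverage and telling properly quoted text from lifted text, which is where an LLM-based agent would come in.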
AI will similarly lower the cost and resource barriers to other fishing expeditions, making it feasible for tax departments to send AI agents trawling through the blockchain for crypto transactions from 2014 that users failed to report, and for offense archaeologists to use AI to comb through every tweet you've made since 2007 looking for inconsistencies or bad takes. It's a brave new world of AI-powered dirt digging.
Two perspectives on AI regulation
Professor Toby Walsh, chief scientist at the University of New South Wales's AI Institute, says heavy-handed approaches to AI regulation are not feasible. He argues that attempts to limit access to AI hardware such as GPUs will not work because LLM compute requirements are falling (see our item below on a local LLM running on an iPhone), and that banning the tech would be about as successful as the United States government's failed efforts to limit access to encryption software in the 1990s.
Instead, he called for vigorous enforcement of existing product-liability laws to hold AI companies to account for the actions of their LLMs. Walsh also called for a focus on competition, applying antitrust regulation more forcefully to lever power away from the Big Tech monopolies, and for more government investment in AI research. Meanwhile, venture capital firm Andreessen Horowitz has gone hard on the "competition is good for AI" theme in a letter to the United Kingdom's House of Lords. It says that both large AI companies and startups should be allowed to build AI as fast and aggressively as they can, and that open-source AI should be allowed to proliferate freely to compete with both.
A16z's letter to the House of Lords. (X)

All killer, no filler AI news
OpenAI has published a response to The New York Times copyright lawsuit. It claims training on NYT articles is covered by fair use, regurgitation is a rare bug, and despite the NYT case having no merit, OpenAI wants to come to an agreement anyway.
The New Year began with controversy over why ChatGPT is delighted to provide Jewish jokes and Christian jokes but refuses point-blank to make Muslim jokes. Someone eventually got a halal-rious pun out of ChatGPT, which showed why it's best not to ask ChatGPT to make jokes about anything.
In a development potentially worse than fake kidnappings, AI robocall services have been released that can tie you up in fake spam conversations for hours. Someone needs to develop an AI answering machine to screen these calls.
Introducing Bland Turbo. The world's fastest conversational AI:
– Send or receive up to 500,000+ phone calls simultaneously.
– Responds at human-level speed. In anyone's voice.
– Program it to do anything.
Call and talk to Bland Turbo now: https://t.co/qm0sUggov3 (1/3) pic.twitter.com/5e6Y3FU9Wh
— Bland.ai (@usebland) January 5, 2024
A blogger with a mixed track record claims to have insider info that a big Siri AI upgrade will be announced at Apple's 2024 Worldwide Developers Conference. Siri will reportedly use the Ajax LLM, enabling more natural conversations, and will link to various external services. But who needs Siri AI when you can already download a $1.99 app from the App Store that runs the open-source Mistral 7B 0.2 LLM locally on your iPhone?
Around 170 of the 5,000 submissions to an Australian senate inquiry on legalizing cannabis were found to be AI-generated.
More than half (56%) of 800 CEOs surveyed believe AI will entirely or partially replace their own roles. Most also believe that more than half of entry-level knowledge-worker jobs will be replaced by AI, and that nearly half the skills in today's workforce won't be relevant in 2025.