Latest

new Of course, digital protection measures such as antivirus software, secure gateways, firewalls, and virtual private networks (VPNs) remain crucial. And incorporating advanced digital strategies, such as machine learning to monitor for behavioural anomalies, provides an added layer of security. Leadership teams should also assess whether similar approaches have been applied to address any physical vulnerabilities. For example, a combination of manned entry points, locked facilities, cameras, and security alarms offers robust protection. It’s unlikely that a physical intrusion will occur simply to steal a laptop. Instead, these malicious actors commonly look for a way to access data or install malware inside the organisation’s physical perimeter where some protections may be lacking.IT Brief New Zealand, 8h ago
new Change is never easy, particularly large organisational shifts like the ones that can be expected through the adoption of generative AI. The nature of some roles may change, along with established methods that many employees will have become accustomed to over the years. However, with the proper learning and development initiatives in place, such as group coaching, the collaborative mindset can be continued long term and steps can be taken to break down organisational silos.The European Business Review, 15h ago
new Recently, automated stores that use only kiosks without staff have been increasing without regard for the accessibility needs of the information-alienated class, and it is expected that the difficulties faced by this group will increase further. In addition, as the number of consumers in their 20s and 30s who avoid face-to-face interactions and prefer kiosks increases, non-face-to-face marketing in offline stores is becoming a trend spreading to various industries. To meet this need, CN.AI explained that it developed the AI Human with the idea that introducing a friendly kiosk that can be conveniently utilized even by older people would improve customer satisfaction across all ages.Journal of Cyber Policy, 19h ago
new For all of the bravado of AI, before we become too dazzled or dismayed (depending on which side of this issue you reside) at AI’s potential impact on projects, it is worthwhile to sit back with a skeptical bird’s eye view. After all, as we humans evolved throughout the information age of the mid to late twentieth century, AI can just be considered the latest development in a long series of technological advances driven by the engine of human information input. Even if every one of the improvements stated above is tangible, how well does it translate to the ultimate outcome? Organizations around the globe have been trying to digitalize and computerize, some as early as the 1960s and 1970s. Digitization and computerization promised a revolution in productivity through automation, streamlining, and making humans redundant. But the road to a more significant productivity boost has been arduous at best. One of the primary reasons is captured in the Theory of Constraints (TOC) developed by Eliyahu M. Goldratt, an Israeli physicist turned business management guru. In his landmark book The Goal, which sold over 6 million copies, he explains the importance of focusing on the right constraints to improve the overall system. For projects, the constraints are generally not within the technology and tools, but within humans.Healthcare Business Today, 1d ago
new AI systems are often presumed to be completely impartial and neutral. In reality, artificial intelligence acquires biases that exist in the data it is trained on. Developers must actively strive to reduce biases; however, achieving total impartiality remains a difficult task.techgig.com, 1d ago
new Continually remind employees to apply their judgment over the long term. Vahe Andonians, chief technology officer and chief product officer of Cognaize, said that we need humans to apply their judgment to AI outputs even when we no longer need to be checking its facts. “I'm going to steal from Nietzsche,” he said, “It should be humans because we are the only ones that can suffer. AI is not going to suffer…so the judgment layer should be us.” George Lee, co-head of applied innovation at Goldman Sachs, added that one thing his team has been focused on is how to encourage employees to keep a sharp eye, even when the AI system is performing well: “After the 10th experience of it looking just great, are you going to pay attention?” (A recent study on BCG consultants, covered by Charter, illustrated the danger of employees ‘switching off their brains’ when working with impressive AI systems.)...Charter, 1d ago

Latest

new Incorporating bioethics principles into medical AI algorithms is undoubtedly a crucial aspect of creating trustworthy technology that serves to advance society. While the paper highlights multiple critical topics that should be considered and offers robust solutions, gaps and questions remain. More research needs to be done regarding when these AI tools are considered to be conscious of their own actions and, by extension, when liability is placed on humans or on the machines. Additionally, while the authors’ proposed solution of instituting liability fees and other systems between public and private parties is an interesting point to consider, it may be difficult to establish in countries such as the United States, where the healthcare system is incredibly disjointed. Moreover, in places with many competing private healthcare companies, it should be considered that these parties do not necessarily have patients’ best interests at heart. Instead, they tend to prioritize profits over patient well-being—thus adding another obstacle to ensuring ethical medical AI is instituted.Montreal AI Ethics Institute, 1d ago
new Since the beginning of time, humans have been exploring the best ways to progress and make life easier for themselves. Nevertheless, fear of technology has always been a part of the human condition. This was evident when electricity was discovered and American society was reluctant to make it widely available. Artificial intelligence (AI) can be beneficial when used properly, but as I’ve always believed that too much of anything is bad, one must find a balance in how much they rely on AI technology. Thoughtful reasoning is what gave rise to AI, so I will advise finding that balance, but my position doesn’t change. We’ve progressed, and the competition is real. If you don’t use AI, you won’t be able to compete with those who do, and you’ll become outdated.The Guardian Nigeria News - Nigeria and World News, 1d ago
new Policymakers are increasingly considering risk-based assessments of AI systems, such as the EU AI Act. We believe that in this context, AI systems with the potential for deception should be classified at least as “high-risk.” This classification would naturally lead to a set of regulatory requirements, including risk assessment and mitigation, comprehensive documentation, and record-keeping of harmful incidents. Second, we suggest passing ‘bot-or-not’ laws similar to the one in California. These laws require AI-generated content to be accompanied by a clear notice informing users that the content was generated by an AI. This would give people context about the content they are viewing and mitigate the risk of AI deception.Montreal AI Ethics Institute, 1d ago

Top

I think Anthropic just put out a blog post. They’d worked closely with the Collective Intelligence Project that I mentioned earlier doing a similar thing for developing their constitutional AI, defining what the constitutional principles are that AI systems should be aligned to, and that was more of a democratic governance process. So actually, I think we’re starting to see this terminology of “AI democratization”, “democratizing AI” seep into the technical landscape a bit more. I think that’s a very positive thing. I think it’s showing that they’re starting to tap into and really embody the democratic principles of democratizing AI. As opposed to just meaning distribution and access, it actually means reflecting and representing the interests and values of stakeholders and impacted populations.alignmentforum.org, 8d ago
However, for less capable AI systems, ones not powerful enough to run a good utilitarian value function, a set of deontological ethical heuristics (and also possibly-simplified summaries of relevant laws) might well be useful to reduce computational load, if these were carefully crafted to cover the entire range of situations that they are likely to encounter (and especially with guides for identifying when a situation was outside that range and it should consult something more capable). However, the resulting collection of heuristics might look rather different from the deontological ethical rules I'd give a human child.alignmentforum.org, 26d ago
First and foremost, the reference to “sufficiently detailed summary” must be replaced with a more concrete requirement. Instead of focussing on the content of training data sets, this obligation should focus on the copyright compliance policies followed during the scraping and training stages. Developers of generative AI systems should be required to provide a detailed explanation of their compliance policy including a list of websites and other sources from which the training data has been reproduced and extracted, and a list of the machine-readable rights reservation protocols/techniques that they have complied with during the data gathering process. In addition, the AI Act should allocate the responsibility to further develop transparency requirements to the to-be-established Artificial Intelligence Board (Council) or Artificial Intelligence Office (Parliament). This new agency, which will be set up as part of the AI Act, must serve as an independent and accountable actor, ensuring consistent implementation of the legislation and providing guidance for its application. On the subject of transparency requirements, an independent AI Board/Office would be able to lay down best-practices for AI developers and define the granularity of information that needs to be provided to meet the transparency requirements set out in the Act.COMMUNIA Association, 27d ago
Critically, the memo includes definitions of safety- and rights-impacting AI as well as lists of systems presumed to be safety- and rights-impacting. This approach builds on work done over the past decade to document the harms of algorithmic systems in mediating critical services and impacting people’s vital opportunities. By taking this presumptive approach, rather than requiring agencies to start from scratch with risk assessments on every system, the OMB memo also reduces the administrative burden on agencies and allows decision-makers to move directly to instituting appropriate guardrails and accountability practices. Systems can also be added or removed from the list based on a conducted risk assessment.Brookings, 18d ago
Since testing extends to fairness and non-discrimination, security teams need to ensure that their AI systems are fair. This isn't something security teams have focused on in the past, so there will likely be a learning curve. It may mean implementing new testing procedures to check for bias in an automated fashion.wiz.io, 12d ago
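One way to picture such an automated check, offered here as a rough sketch rather than anything described in the wiz.io piece: compare positive-prediction rates across groups inside a test suite and fail the run when the gap exceeds a chosen threshold. The model decisions, the group labels, and the 0.1 threshold below are invented assumptions for illustration.

```python
# Minimal sketch of an automated fairness test: flag the model if its
# positive-prediction (selection) rates differ too much across groups.
# All data and the 0.1 threshold are illustrative assumptions.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Return the share of positive predictions for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def test_demographic_parity(predictions, groups, max_gap=0.1):
    """Fail when the gap between the highest and lowest selection rates exceeds max_gap."""
    rates = selection_rates(predictions, groups)
    gap = max(rates.values()) - min(rates.values())
    assert gap <= max_gap, f"Parity gap {gap:.2f} exceeds {max_gap}: {rates}"

if __name__ == "__main__":
    preds = [1, 0, 1, 0, 1, 0, 1, 0]            # model decisions on a held-out set
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    test_demographic_parity(preds, groups)       # passes: both groups at a 0.5 rate
```

In practice a check like this would sit alongside the existing security test suite and run on every model or data refresh.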
Errors in input data uncorrelated to outcomes (e.g. random lens flare in image data) become critical in applications where inaccuracy is only acceptable within tight margins (e.g. self-driving vehicles), and much less so if AI is used as a complementary system that recommends actions to a human decision maker. Further, applications have varying tolerance for type I and type II errors (Shafer and Zhang 2012: 341). Consider the trade-off between freedom of speech and illegal content in content moderation systems. Avoiding type I errors (not removing illegal content) seems preferable to avoiding type II errors (removing lawful content). However, these prediction errors can only be classified if the ground truth is at least approximately known.CEPR, 12d ago
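To make that trade-off concrete, here is a toy illustration (not from the CEPR piece) of how moving a moderation classifier's decision threshold shifts errors between the two types; the scores and labels are invented, and "type I"/"type II" follow the article's usage (illegal content left up versus lawful content removed).

```python
# Toy illustration of the type I / type II trade-off in content moderation.
# Scores and labels are invented; label 1 = illegal content, 0 = lawful content.

def error_rates(scores, labels, threshold):
    """Remove items whose score >= threshold; return (type_i_rate, type_ii_rate).

    Type I (article's usage): illegal content left up (missed removal).
    Type II: lawful content removed.
    """
    illegal = [s for s, y in zip(scores, labels) if y == 1]
    lawful = [s for s, y in zip(scores, labels) if y == 0]
    type_i = sum(s < threshold for s in illegal) / len(illegal)   # missed illegal
    type_ii = sum(s >= threshold for s in lawful) / len(lawful)   # over-removal
    return type_i, type_ii

scores = [0.95, 0.80, 0.60, 0.40, 0.30, 0.10]
labels = [1,    1,    0,    1,    0,    0]

for t in (0.5, 0.7, 0.9):
    t1, t2 = error_rates(scores, labels, t)
    print(f"threshold={t:.1f}  missed illegal={t1:.2f}  lawful removed={t2:.2f}")
```

Raising the threshold in this toy example removes less lawful content but lets more illegal content through, which is exactly the policy choice the paragraph describes.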

Latest

new Diabetic retinopathy (DR) is one area where AI is being used as a screening tool. According to Dr Gopal Pillai, professor and head of the department of ophthalmology at Amrita Institute of Medical Sciences in Kochi, approximately one third of people with diabetes in India will have DR, and about 10-12 per cent of those will have vision-threatening DR, meaning the disease has seriously affected the patient’s vision and failure to treat it in a timely manner will result in irreversible vision loss. One thing that was hampering early detection of DR was that it is asymptomatic until, that is, the person completely loses his vision. “One might be driving, reading, watching TV and doing their activities without even knowing that a time bomb is ticking inside the eye. And because the patient would often come in late, there was no way of early diagnosis,” says Pillai, who is leading a government-sponsored clinical trial network.theweek.in, 1d ago
new Now there's a lot between here and there. How do we know that's true? And I want to talk about that later. But that's something we can imagine, delegating to a machine as opposed to delegating to a human. There'll be upsides, there'll be downsides. I think this is also something that will change with time. We might not be comfortable with today, in five years we might be. Do we want an AI to act as a court jury? We use human jurors for historical reasons, nothing else worked. Human jurors are fallible, they're influenceable, they have prejudices, they're biased, all sorts of problems with them, but all sorts of good things about them. AIs in the place of jurors, they're going to have biases and problems and all sorts of things. But will they do a better job today, tomorrow, in five years? I don't know. But my guess is that over the next several decades there will be tasks that today we say, "No, a human must perform them." Now, in the future you might say we're okay with an AI doing it. I think you ask an important question of, "How do we know what its values are? How do we figure it out?" Well, in some way it's no different than people. How do I know your values? I could ask you, you might be lying. I could observe you in practice. You might be deceiving me. But over the course of time, if we are friends, I will get to learn your values. And I think the same is going to be true for an AI. We can ask it. Maybe it'll tell us the truth. Hopefully it will. We could watch it in practice. And over the course of time working with an AI, we might come to trust it more because we have watched it implementing its values and practice.harvard.edu, 2d ago
new Spray adds agencies that want to innovate with AI must also be willing to accept risk and failure, and be diligent when it comes to compliance. “The reality is that adopting new technology is not going to be a smooth road,” she says. “We have to consider things from lots of different perspectives, from legal to regulatory to ethics. How do you create those governance processes – do you have them already or do you need to do that now?”...Cision, 2d ago
new The contract, which is being voted on and needs a majority of “yes” votes to come into force, was considered a victory by many and received with suspicion by others. In the initial vote, 14% of the members of the union’s national board voted “no”, and that is what Portuguese actress Kika Magalhães intends to do as well. “The reason is that they don’t protect actors in relation to digital replicas”, the actress, based in Los Angeles since 2016, told Lusa. “They say yes, that there is protection, but then we look between the lines and there is nothing.” Kika Magalhães, whose latest film, “The Girl in the Backseat”, has just been released on Amazon Prime Video and the streaming platform Tubi, points to how digital replicas can be disastrous. “An actor goes for a casting and the producers ask if he will accept their digital replica. If the actor says no, they may not give him the role,” she explains. Top-notch actors will be able to negotiate and say no without losing the role. “But small actors like us don’t bring as much money to the union and they don’t protect us as much”, considered Kika Magalhães. The actress doubts the solution put forward by one of the clauses, according to which an actor whose digital replica is used by a studio will be paid for the hours he or she would have spent filming. “This is very relative, because a scene can take a month to film. They can say it took a day to make.” Actress Justine Bateman also criticized loopholes that allow studios to use digital replicas without actors’ consent when certain conditions are met. The results of the vote will be known on December 5th. If there are 50%+1 “yes” votes, the contract will come into force for the next three years. “I have heard many actors saying that they will vote no”, said Kika Magalhães. Her husband, actor Chris Marrone, said that “if the majority fully understands what they are signing, then they vote no.” Marrone considered that the SAG contract “doesn’t seem like a big victory after all” and that there should be specific language defining actors as human beings. This is something that actress Katja Herbers also defends, in opposition to “synthetic actors”. However, the expectation is that the “yes” will win, because the industry has been at a standstill for too long and there is widespread fatigue. This is what Mário Carvalhal, who belongs to the Animation Guild, anticipates, stressing that the stoppage was long and the “no” camp appears to be a minority. “There is a possibility that some people will vote no, but I believe that these new measures will pass and be approved,” he told Lusa. “I think it is a minority that is very right in what they are demanding, but it was practically a whole year of work stopped in this city and I think everyone is ready to move forward”. Mário Carvalhal considers that the big risk of AI will be a reduction in quality and a change in the way the industry works. “Actors have more to claim, especially when it comes to those who do voices. There have already been cases where AI can do the job,” he said. “It’s an inferior job, but for many companies it’s enough and doesn’t cost them anything.” Carvalhal considers that actors “must maintain their rights to image, voice and everything else, their likeness.” He also stressed that, although the strikes did not achieve all their objectives, they allowed “important steps in the right direction” to be taken, and this is an aspect of which the strikers are proud.
“As much as possible, I think the workers won this fight”, he considered. For screenwriter Filipe Coutinho, a member of the Portuguese Cinema Academy, the unions were justified in their fight, which took longer than expected. “I’m quite satisfied with the way both the WGA and SAG acted over these six months”, he told Lusa. “It’s an unbelievable time to have an entire industry at a standstill,” he stressed. “California is one of the largest economies in the world and it is incomprehensible that it took so long for the studios to offer a fair contract to writers and actors.” Filipe Coutinho also said that, even with the agreements, “everything is a little up in the air”, with studios and production companies “trying to understand what the next phase will be”. He mentioned changes in the business model, with blockbusters expected to fail at the box office, cancellations of films and the dilemma of streaming. “No one really knows what to invest in and under what conditions to invest, and now contracts also change the approach to content production.” Afonso Salcedo, a lighting artist who worked on the new Disney film “Wish – The Power of Desires”, considers that the strikes were difficult but important, at a time when it is not yet clear to what extent AI will affect the industry. “The agreements will last three years, so I think it is a good step to see how the technologies will work in the coming years”, he indicated, noting that the animation segment will have to renegotiate its contract in 2024. “It will be interesting to see what will happen, if we are going to negotiate protections against Artificial Intelligence”, stated Afonso Salcedo. “Maybe, next year, we will get into these fights with the studios again.” The vote on the agreement reached between the SAG-AFTRA union and the studios runs until December 5th. The results will be tabulated and published on the same day.adherents, 2d ago
new If we could observe and modify everything that’s going on in a human brain, we’d be able to use optimization algorithms to calculate the precise modifications to the synaptic weights which would cause a desired change in behavior. Since we can’t do this, we are forced to resort to crude and error-prone tools for shaping young humans into kind and productive adults. We provide role models for children to imitate, along with rewards and punishments that are tailored to their innate, evolved drives. In essence, we are poking and prodding at the human brain’s learning algorithms from the outside, instead of directly engineering those learning algorithms.alignmentforum.org, 2d ago
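A minimal sketch of the contrast being drawn, under the simplifying assumption that the "brain" is a single differentiable weight we can read and write: an optimizer computes exactly the adjustment that moves behaviour toward a target, which is what we cannot do with children. Everything below is an invented toy, not a claim about real neural modification.

```python
# Toy sketch: when the "weights" are visible and modifiable, an optimizer can
# compute the precise update that produces a desired change in behaviour.
# The one-weight model, stimulus, and target response are all invented.

def behaviour(weight, stimulus):
    """A toy 'brain': response is a linear function of the stimulus."""
    return weight * stimulus

def gradient_step(weight, stimulus, desired_response, lr=0.1):
    """Analytic gradient of the squared behavioural error with respect to the weight."""
    grad = 2 * (behaviour(weight, stimulus) - desired_response) * stimulus
    return weight - lr * grad

w = 0.0
for _ in range(50):                    # direct engineering: repeatedly nudge the weight itself
    w = gradient_step(w, stimulus=1.0, desired_response=0.7)

print(round(w, 3), round(behaviour(w, 1.0), 3))   # weight converges so behaviour ~= 0.7
```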
new Advancements in biometrics, smartphones, and document recognition have been game-changers for balancing security and convenience. More and more, banks will be able to build filters that make it harder for bad actors while easier for good ones. It's important to have the latest and best technology possible making sure that hurdles aren't the same height for good actors and bad actors. For instance, bots armed with AI can breeze through knowledge questions and form fills. However, biometric tech makes it simple for real people to snap ID photos but extremely tough for bots. With the right innovations, complexity can be removed for consumers while scrutinizing bad actors more effectively. The ideal system has just enough friction to deter fraud without frustrating users. By leveraging cutting-edge solutions, banks can eliminate hassles while enhancing security.Financial IT, 2d ago

Top

BUOLAMWINI: ...A product, you know, he had never even heard of. And so we see these algorithms of exploitation that are taking our actual essence. And then we also see the need for civil rights and human rights continue. And so it was very encouraging to see in the executive order that the principles from the blueprint for an AI bill of rights such as protections from algorithmic discrimination, that the AI systems being used are effective, that there are human fallbacks, were actually included, because that's going to be necessary to safeguard our civil rights and our human rights.NPR, 5d ago
Sections 7.1 and 7.3 of the Order specifically address concerns about discrimination in the workplace. For example, Section 7.1 directs the Assistant Attorney General for the Department of Justice, Civil Rights Division, to coordinate with other Federal civil rights offices to discuss how to “prevent and address discrimination in the use of automated systems, including algorithmic discrimination.” Section 7.3 directs the Secretary of Labor to publish guidance for federal contractors “regarding non-discrimination in hiring involving AI and other technology-based hiring systems.” As employers increasingly implement AI tools and strategies, such as to make recruiting and hiring more efficient by drafting job descriptions, screening applicants, identifying key job functions, and more, they will likely be governed by a new regulatory scheme with heightened scrutiny of their use of such tools and of the due diligence employers engage in to validate those tools and to ensure legitimate and unbiased predictive outcomes.natlawreview.com, 26d ago
Generative AI can offer useful tools across the recruiting process, as long as organizations are careful to make sure bias hasn’t been baked into the technology they’re using. For instance, there are models that screen candidates for certain qualifications at the beginning of the hiring process. As well-intentioned as these models might be, they can discriminate against candidates from minoritized groups if the underlying data the models have been trained on isn’t representative enough. As concern about bias in AI gains wider attention, new platforms are being designed specifically to be more inclusive. Chandra Montgomery, my Lindauer colleague and a leader in advancing equity in talent management, advises clients on tools and resources that can help mitigate bias in technology. One example is Latimer, a large language model trained on data reflective of the experiences of people of color. It’s important to note that, in May, the Equal Employment Opportunity Commission declared that employers can be held liable if their use of AI results in the violation of non-discrimination laws – such as Title VII of the 1964 Civil Rights Act. When considering AI vendors for parts of their recruiting or hiring process, organizations must look carefully at every aspect of the design of the technology. For example, ask for information about where the vendor sourced the data to build and train the program and who beta tested the tool’s performance. Then, try to audit for unintended consequences or side effects to determine whether the tool may be screening out some individuals you want to be sure are screened in.Hunt Scanlon Media, 3d ago
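As a hedged illustration of the kind of audit described above (not a tool or method the article names), one common heuristic in US hiring contexts is the "four-fifths rule": compare each group's selection rate to the most-selected group and flag large disparities for review. The screening decisions, group names, and threshold below are invented.

```python
# Minimal sketch of an adverse-impact audit for an AI screening tool, using the
# "four-fifths rule" heuristic often cited in US hiring contexts. The data and
# the pass/fail decisions below are invented for illustration.

def selection_rate(decisions):
    """Share of candidates the tool advanced to the next stage."""
    return sum(decisions) / len(decisions)

def four_fifths_check(decisions_by_group):
    """Compare each group's selection rate to the highest-rate group."""
    rates = {g: selection_rate(d) for g, d in decisions_by_group.items()}
    benchmark = max(rates.values())
    return {g: {"rate": r, "ratio": r / benchmark, "flag": r / benchmark < 0.8}
            for g, r in rates.items()}

audit = four_fifths_check({
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 75% advanced
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],   # 37.5% advanced
})
for group, result in audit.items():
    print(group, result)   # group_b's ratio of 0.5 would be flagged for human review
```

A flag from a check like this is a prompt for the deeper vendor questions the paragraph recommends, not a verdict on its own.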

Latest

new California’s report also usefully lays out the potential risks associated with using these new tools, making clear that while there are some new potential harms, in many cases many of the risks are common to the use of any technology. Governments need to be conscious of the fact that tools that enable the easy generation of high-quality content could be misused to dupe consumers and residents. Perhaps because 35 of the 50 leading AI businesses are in California, as the state's report points out at the outset, it is silent on the risks to governments and those they serve of relying excessively on technologies developed and governed by unaccountable companies, especially when those technologies are procured by public servants without a deep knowledge of the tech.GovTech, 2d ago
new The year 2024 is anticipated to witness a surge in the occurrence of Generative AI-based cybersecurity attacks, primarily deepfake attacks facilitated by Generative AI. This poses a significant cybersecurity threat, as Generative AI can also be employed for phishing and social engineering attacks. Besides, attackers can exploit Generative AI techniques to develop new variants of malware that can evade detection by traditional security measures. It is essential to note that Generative AI models are susceptible to data poisoning and adversarial attacks, making them vulnerable to exploitation. Overall, this implies that 2024 will be marked by an increased risk of Generative AI-based cyber threats, which can cause severe harm to businesses and industries.Thinkers360 | World’s First Open Platform For Thought Leaders, 2d ago
new ...4. Peer-checked: While all of the above virtues could be performed by a single individual, the chances of misinterpreting incoming external intel, producing lower-quality output or simply making genuine mistakes are drastically reduced if the four-eyes principle is applied (ideally with another TI analyst), or, in a smaller team, in collaboration with a DE sitting at the interface with TI, helping to review and document new threats with the TI analyst. After all, the cyber world is human-driven (sorry, AI…)...Security Boulevard, 2d ago
new The AI floodgates opened in 2023, but the next year may bring a slowdown. AI development is likely to meet technical limitations and encounter infrastructural hurdles such as chip manufacturing and server capacity. Simultaneously, AI regulation is likely to be on the way. This slowdown should give space for norms in human behavior to form, both in terms of etiquette, as in when and where using ChatGPT is socially acceptable, and effectiveness, like when and where ChatGPT is most useful. ChatGPT and other generative AI systems will settle into people’s workflows, allowing workers to accomplish some tasks faster and with fewer errors. In the same way that people learned “to google” for information, humans will need to learn new practices for working with generative AI tools. But the outlook for 2024 isn’t completely rosy. It is shaping up to be a historic year for elections around the world, and AI-generated content will almost certainly be used to influence public opinion and stoke division. Meta may have banned the use of generative AI in political advertising, but this isn’t likely to stop ChatGPT and similar tools from being used to create and spread false or misleading content. Political misinformation spread across social media in 2016 as well as in 2020, and it is virtually certain that generative AI will be used to continue those efforts in 2024. Even outside social media, conversations with ChatGPT and similar products can be sources of misinformation on their own. As a result, another lesson that everyone – users of ChatGPT or not – will have to learn in the blockbuster technology’s second year is to be vigilant when it comes to digital media of all kinds. Tim Gorichanaz, Assistant Teaching Professor of Information Science, Drexel University. This article is republished from The Conversation under a Creative Commons license. Read the original article.GovTech, 2d ago
Gillibrand said the legislation is an important step forward in the effort to deter illegal robocalls. "Don't dial if you don't want to go to trial," the Democrat said. "But, there's still more we need to do to address the rise of generative AI. I'm sending a letter to the chair of the Federal Trade Commission requesting information about its work to track the increasing use of artificial intelligence to perpetrate frauds and scams against older Americans. While public reporting indicates that more families are being targeted by voice clones in family-emergency scams, the number of Americans targeted by scammers using generative AI remains unknown." Earlier this month, the Federal Communications Commission announced it will pursue an inquiry to study the impact of artificial intelligence on robocalls and robotexts and is evaluating how it can also use AI technology to combat the problem. Gillibrand said she hopes to get both Republican and Democratic co-sponsors to push the bill forward, as people on both sides of the aisle are alarmed by the incidents. Gillibrand advised New Yorkers, especially older residents, to be cautious and aware of the problem. She said she's also weighing other legislation that would create a responsibility for banks and tellers to ask a set of standardized questions if an elderly person goes to a bank and wants to take out, say, $10,000 when that is not a usual practice. "If [they have] never done that before, to have a series of questions that the teller can ask to say, 'Are you taking this out for a reason? Is there an emergency? Have you verified the emergency with a loved one? Would you like me to help you verify the emergency?'" Gillibrand explained. "I want to come up with some legislation to focus our tellers on good questions they can ask that don't violate their privacy or make them feel unsure of themselves or insecure, but just protective questions." © 2023 The Daily Gazette, Schenectady, N.Y. Distributed by Tribune Content Agency, LLC.GovTech, 3d ago
new Companies that do not adhere to the new laws may be fined and their products may be withdrawn from circulation in the EU. The Cyber Resilience Act is part of the EU’s wider plan to crack down on what it sees as threats to safety and human rights presented by Big Tech. Leo Moore, partner and head of technology at law firm William Fry recently told SiliconRepublic.com how the bloc is enforcing various regulations to ensure tech companies are being held responsible for things such as the use of personal data, AI and cybersecurity.Silicon Republic, 2d ago

Top

What does this mean in practice? It means that cyber security and disinformation, which are already prominent and incredibly challenging features of modern war, will become even more of a problem in conditions of intensive automation. Adversaries have incentives to manipulate or poison the data that feeds AI systems. AI will thus expand the range of counterintelligence risks to worry about. It also means that adversaries have incentives to move conflict in unexpected directions, i.e., where AI systems have not been trained and will likely perform in undesired or suboptimal ways. This creates not only data problems but judgment problems as well. Combatants will have to reconsider what they want in challenging new situations. As intelligent adversaries escalate conflict into new regions, attack new classes of targets, or begin harming civilians in new ways, how should AI targeting guidance change, and when should AI systems be withheld altogether? We should expect adversaries facing AI-enabled forces to shift political conflicts into ever more controversial and ethically fraught dimensions.Texas National Security Review, 26d ago
It led to concerns about monopolies, worker safety, unfair wages and child labor. It produced the weapons that were used to fight two world wars. In short, it wasn’t just about technology that could be used for good, and I’m grateful our first witness, Daron Acemoglu, has studied this phenomenon. He has not only examined the history of technological change, but also the democratic institutions that are needed in response. In the 20th century, we had trade unions to protect workers’ rights and effective government regulation to keep those industries in check. What tools do we need to meet this moment, and what else should we learn from this history? Artificial intelligence also brings unique challenges. The history of technological change has largely centered on human strength and how we can augment it through the use of new machines. AI will affect physical work, but unlike other technologies, it is more directly tied to our intellectual and cultural capacities.Tech Policy Press, 25d ago
These kinds of manipulated videos are already affecting students in our schools. In one instance, a group of high school students in New Jersey used the images of a dozen female classmates to make AI pornography. This is wrong. It threatens lives and self-esteem among young people, and it needs to be stopped. Earlier this year, ranking member Joe Morrell introduced a bill called the Preventing Deepfakes of Intimate Images Act. The bill bans non-consensual images. The order instructs the Secretary of Commerce, whoops, I’m sorry, of sharing synthetic intimate images and creates additional legal courses of action for those who are affected. I’m a co-sponsor of that legislation and urge all of my colleagues to join us in this important effort. Congress must not shy away from preventing the harmful proliferation of deep fake pornography, but it’s not just deep fake videos that we have to worry about. With AI, scammers have the ability to easily create audio that mimics a person’s voice, matching age, gender, and tone.Tech Policy Press, 24d ago
...“I know in our district, our special ed kids were the first ones to have iPads, and they were the kids that looked a little bit different because they were the only ones in the room with tech,” she said. “I worry about teachers removing tech to deal with AI, and now suddenly our special ed kids who don’t have tech written into their IEPs (individualized education programs) anymore are losing an accommodation, so that’s what I’ve been telling teachers, is, ‘You can’t go back to paper.’” Lewsadder said she also created an AI Slack channel in her tech team’s workspace so anyone interested could post information, continue learning and invite teachers. “In technology, we’ve learned that you don’t have to know all the answers, you have to know how to get them,” she said. “If we can get that message to our educators, they will feel more confident with technology tools, so that’s something else to think about.” Several useful pieces of information emerged, including that many people lack general knowledge about data privacy with GenAI. Lewsadder said many students were shocked to learn that if they had entered their resumes into ChatGPT, they gave away that personal information. This prompted her to send emails to families and encourage regular communication about these issues among staff. “It’s a quick message. I think you just do PA announcements, you send it to the parents, the teachers can say these things — it doesn’t have to be a revolutionary change, but getting the message out about the risk is really important for the kids,” she said. Jennette Vanderpool, an education strategist with the ed-tech service provider CDW Education, said another reason to maintain open conversation among faculty is that they use different tools, and some have better content restrictions than others. For example, ChatGPT will give a user the recipe for a Molotov cocktail, while Merlyn Mind will not. “We’re really waiting for Google and Microsoft to get on board with the educational safety component,” she said. Vanderpool added that teachers need to know these things so they can set their own personal classroom rules accordingly — defining which tools they want to use, amending their syllabus language with basic guidelines, and teaching students APA and MLA citation rules for generative AI. “We can’t tell teachers how to teach, we can’t tell them how to grade, but if we create policy around AI usage, then it can be on the teacher’s plate to define how specifically they want it written in their syllabi,” she said. Paradoxically, for all its risks, another key hurdle in adopting GenAI is the student tendency to view it through a negative lens, which won’t be helpful to them entering a workforce that has integrated with the technology. “The kids don’t really see it as something they’re supposed to be doing, so they have this negative connotation, and I think that’s one thing we all really have to fight to break,” Lewsadder said.GovTech, 3d ago
Some of the questions are about facts, such as “How or where do developers of AI models acquire the materials or datasets that their models are trained on?” Other questions seek the public’s views on topics, including those that are being hotly debated or have been the focus of recent lawsuits against AI companies, such as whether the unauthorized use of copyrighted works to train AI models is fair use, whether consent from copyright owners should be needed to use their works for training of AI models, whether AI-generated outputs would implicate the exclusive rights of preexisting copyrighted works, whether the substantial similarity test is still the appropriate test to address claims of infringement based on outputs of generative AI, how the prohibition against removal or alteration of copyright management information under the Digital Millennium Copyright Act (DMCA) would apply to the output of a generative AI system trained on copyrighted works containing copyright management information, and whether Congress should establish a new federal right of publicity that would apply to AI-generated material.natlawreview.com, 20d ago
Even if we take out the existential risks of Artificial General Intelligence (AGI) coming to life to kill us, there are still plenty of real risks to worry about. The big ones that grab our attention won't be about practical and everyday threats like AI bias, inequity, civil rights violations, and pollution. They will be political red meat baiting topics like the use of these tools for child sexual abuse, terrorist attacks, or to empower the ‘Chinese threat,’ which we need to regulate to protect our values. This trajectory is already changing the landscape of encryption standards, finance, and reporting practices that could adversely impact privacy, autonomy, and civil rights.diginomica, 24d ago

Latest

Even aside from whether individual researchers intend to pursue ectogenesis, the past thirty or so years have taught us that scientific innovation can move from futuristic to commonplace with extraordinary speed. Those of us who were kids in the 1990s remember the sudden transition from a handful of classmates having screeching dial-up internet to a smartphone in every hand. Researchers well know that technological breakthroughs often lead to very different ends than what they originally intended. Scientific progress frequently outpaces our regulatory systems—and even our imaginations. The truth is that we have been dreaming of artificial-womb technology since rows of warming boxes prompted the rumour that babies could be cultivated like flowers in greenhouses. But now that we are finally reaching the scientific capacity to create an artificial womb, the question is no longer “Is this innovation possible?” The question is: Are we ready?...The Walrus, 3d ago
In recent years, Geospatial Data Science – the use of geographic knowledge and AI approaches to extract meaningful insights from large-scale geographic data – has achieved remarkable success in spatial knowledge discovery and reasoning, and geographic phenomena modeling. However, two challenges remain in geospatial data science: (1) geographic phenomena are always treated as functions of a set of physical settings, but human experience has received insufficient attention; (2) there are limited strategies to focus on and address geoethical issues. In this talk, Dr. Kang will present a series of works that utilized geospatial data science to understand human experience and sense of place. In particular, using large-scale street view images, social media data, human mobility data, and advanced GeoAI approaches, he measured and analyzed human subjective safety perceptions (e.g., whether a neighborhood is perceived as a safe place), and emotions (e.g., happiness) at places, as well as human-environment relationships. Also, his work paid attention to geoethical issues such as monitoring perception bias and model bias and protecting geoprivacy.nyu.edu, 3d ago
AI-powered surveillance systems are gradually becoming common, and with this most of us have started to worry about privacy and safety. For example, in China, they use facial recognition technology to watch people closely, while in the United States the police use algorithms to predict where crimes might happen. These technologies could violate the personal freedoms of people and make inequalities in society even worse. In simple words, they might invade our privacy and make the gaps between different groups of people even bigger.Techiexpert.com, 3d ago

Latest

We have got this far with no financial debt incurred. Everything has been paid for by the counselling business. I also have a number of extraordinary people and companies to thank for giving up their time to support the business. However, that goodwill can only go so far. We now need funding to support the marketing effort and to fund further developments of the app. We want to introduce AI and include other features that I know I needed. I want to make the app available for everyone who needs it (which means addressing a number of legal and compliance issues). 1 in 3 women and 1 in 6 men will experience this type of abuse in their lifetimes.TechRound, 3d ago
It’s not a single linear solution to prepare for AI. It is a whole continued process to make sure you have the right things in place to stand on so you have eyes in the air for new announcements with partners or new regulations or policies to guide you. How do orgs see themselves positioned with that? Approaches can help them understand when opportunity comes up, whether or not they should participate.American Press Institute, 3d ago
...‘A mass assassination factory’: Inside Israel’s calculated bombing of Gaza…The Israeli army’s expanded authorization for bombing non-military targets, the loosening of constraints regarding expected civilian casualties, and the use of an artificial intelligence system to generate more potential targets than ever before, appear to have contributed to the destructive nature of the initial stages of Israel’s current war on the Gaza Strip, an investigation by +972 Magazine and Local Call reveals….The bombing of power targets, according to intelligence sources who had first-hand experience with its application in Gaza in the past, is mainly intended to harm Palestinian civil society: to “create a shock” that, among other things, will reverberate powerfully and “lead civilians to put pressure on Hamas,” as one source put it….Several of the sources, who spoke to +972 and Local Call on the condition of anonymity, confirmed that the Israeli army has files on the vast majority of potential targets in Gaza — including homes — which stipulate the number of civilians who are likely to be killed in an attack on a particular target. This number is calculated and known in advance to the army’s intelligence units, who also know shortly before carrying out an attack roughly how many civilians are certain to be killed.Mondoweiss, 3d ago
This research agenda focuses on self-improving systems, meaning systems that take actions to steer their future cognition in desired directions. These directions may include reducing biases, but also enhancing capabilities or preserving their current goals. Many alignment failure stories feature such behaviour. Some researchers postulate that the capacity for self-improvement is a critical and dangerous threshold; others believe that self-improvement will largely resemble the human process of conducting ML research, and it won't accelerate capabilities research more than it would accelerate research in other fields.lesswrong.com, 3d ago
There are various ways in which he thinks we could get this wrong. In AI, there’s the possibility that you end up with self-generating solutions that turn out not to be beneficial for wider humanity, a race to the bottom between AI-fuelled machines or the risk of weaponisation – it could be literal weaponisation – of these tools to go after somebody else or another state. Part of his warning is that accidents happen when humans are involved in doing this stuff. We do not necessarily get things right all the time, which brings us back to our books on failure.Five Books, 3d ago
...stranger: (aside) Most writers have a hard time conceptualizing a character who's genuinely smarter than the author; most futurists have a hard time conceptualizing genuinely smarter-than-human AI; and indeed, people often neglect the hypothesis that particularly smart human beings will have already taken into account all the factors that they consider obvious. But with respect to sufficiently competent individuals making decisions that they can make on their own cognizance—as opposed to any larger bureaucracy or committee, or the collective behavior of a field—it is often appropriate to ask if they might be smarter than you think, or have better justifications than are obvious to you.lesswrong.com, 3d ago

Latest

Just like technology, policies can quickly become outdated. They must be revised, replaced, or even removed. Although this isn’t the most exciting area of CISO work, creating clear policies that are proactive and empowering, not restrictive, can ensure employees gain the benefits of new technology without the risk. For example, generative AI (GenAI) can offer enormous benefits for a company — improved productivity, efficiency, and creativity. But without appropriate guardrails to govern how the technology is used and what data (or code) can be input into GenAI models, a company could be at extreme risk for compromise. Creating a formal policy with input from stakeholders throughout the company enables employee use of the technology while reducing risk.securitymagazine.com, 3d ago
The group engaged in research discussions surrounding the equity of systems that rely on AI and automated decisions. Presenters and participants emphasized the idea that systems need to be designed to combat bias from the beginning to avoid creating issues for end users.clarkson.edu, 3d ago
No experiment I could possibly design today is more valuable than preserving the opportunity to pose a new experiment tomorrow, next year, or in a decade. My cohort of scientists has come up inspired by imagining what it was like for contemporaries of Darwin to encounter and compare global wildlife, or during the modern synthesis, as the invisible internal mechanisms of evolutionary genetics unfurled. Now, we stare down the prospect that, during our turn, we will have to watch the biosphere die. I have peers who set out to study ancient mass extinction events only to find that the conditions that precipitated ancient mass extinction events aptly describe events now. I have contemporaries who set out to discover new species by recording sounds in the rainforest, only to capture an eerie transition toward silence. I've done very little field work and I study hardy, laboratory-tractable species that aren't endangered or picky about where they live, but even I stopped finding butterflies at my best collection site after wildfires. In my 10 years in science, I think I've never been to any research conference, on any topic, without hearing my colleagues interject dire warnings into their presentations – and I've never attended a climate-focused conference. So, the most important research question is ‘will the species I hope to study – and a stable international society that can support research activity as I've known it – survive the next 50 years?' With that in mind, with ‘unlimited’ funding, the best thing I can imagine doing for science is to fight. I think of legal support for climate protesters; cultivating honest communication platforms that bypass corporatized media; criminalizing ecocide; eliminating fossil fuels fast; protecting democracy against regulatory capture; buying out and defending the recommended 30% of Earth's surface as nature reserves; facilitating socially just transitions to safely support humans in the remaining land.The Company of Biologists, 3d ago
The opaqueness in the decision-making process of LLMs like GPT-3 or BERT can lead to undetected biases and errors. In fields like healthcare or criminal justice, where decisions have far-reaching consequences, the inability to audit LLMs for ethical and logical soundness is a major concern. For example, a medical diagnosis LLM relying on outdated or biased data can make harmful recommendations. Similarly, LLMs in hiring processes may inadvertently perpetuate gender bi ases. The black box nature thus not only conceals flaws but can potentially amplify them, necessitating a proactive approach to enhance transparency.unite.ai, 3d ago
Consumers are concerned AI will take the "human" elements out of health care, consistently saying AI tools should support rather than replace doctors. Often, this is because AI is perceived to lack important human traits, such as empathy. Consumers say the communication skills, care and touch of a health professional are especially important when feeling vulnerable.medicalxpress.com, 3d ago
Jim Davis: I do not see pushback from the existing workforce. There is generally high interest in engaging, as Dan says. I will add, though, that workers need to be aware, see leadership interest, and have access to the capabilities. We still see the gap widening between the larger companies that have the resources and those that don’t, and we still don’t see that manufacturing is viewed as a particularly high-tech profession from a future workforce standpoint. I did want to mention there can be workforce concerns with privacy and personal intrusion if AI is used to monitor individuals. I really like this new term of “co-piloting” that has emerged. It better communicates the value of using AI to support the worker to do his or her job better.The Manufacturing Leadership Council, 3d ago

Latest

Soltani said the agency's board and the public will have opportunities to provide input on the proposed rules starting next month. The guidelines are meant to clarify how the 2020 California Consumer Privacy Act — which addressed a range of electronic and online uses of personal information — should apply to decision-making technology. The proposal also outlines options for how consumers' personal information could be protected when training AI models, which collect massive data sets in order to predict likely outcomes or respond to prompts with text, photo and video. OpenAI and Google already have been sued over their use of personal information found on the internet to train their AI products.GovTech, 3d ago
Addressing the dangers of AI does not require a complete halt or slowdown in development. Instead, regulation and risk management must be proportionate to the level of risk: many kinds of AI require little or no regulation, while other systems require guardrails and incentives calibrated to balance risks and benefits.Centre for International Governance Innovation, 4d ago
As with general AI regulation, there are normative and functional challenges to deploying effective use-case regulation. At its core, use-case regulation inherently requires strengthening consumer and civil rights protections across the board, a challenging endeavor in the current US political climate. Furthermore, coordinating and maintaining consistency across regulations can be difficult, especially if a platform or technology concerns multiple agency jurisdictions, and risks fragmentation potentially leading to more harm and weaker protections than currently available.Tech Policy Press, 4d ago

Top

Now let me be clear - when I say "human", I actually mean a bit more than that. I mean that humans have certain people-y qualities that I enjoy and that I feel make them worth caring for, though they are hard to pin down. I think these people-y qualities are not necessarily exclusive to us; in some measures, many non-human animals do possess them, and I cherish them in those too. And if I met a race of peaceful, artful, friendly aliens, you can be assured that I would not suddenly turn into a Warhammer 40K Inquisitor whose only wish is to stomp the filthy xenos under his jackboot. I can expand my circle of concern beyond humans just fine; I just don't think the basis to do so is simply some other thing's ability to mock or even improve upon some of our cognitive faculties. I am not sure what precisely could be a good description of these people-y qualities. But I think an art generator AI that can spit out any work in any style based on a simple description as a simple prediction operation based off a database probably doesn't possess them; and I think any super-intelligence that would be willing to do things like strip-mine the Earth to its core to build more compute for itself in a relentless drive to optimization definitely doesn't possess them.lesswrong.com, 22d ago
The bigger question is whether we, as a society, welcome the proliferation of algorithmic systems that do not require consent from individuals whose information is being used to create them and whose behaviour they will subsequently monitor. Until recently, AI has existed in a moral and legal vacuum, and this may be the right moment to ensure our core values are protected. Consent is an important issue here.Centre for International Governance Innovation, 24d ago
There’s also the way we find love and romance. Already, AI dating tools are aping online dating, except the person on the other end isn’t a person at all. There’s already one company that has AI doing the early-stage we-met-on-Tinder thing of sending flirty texts and even sexy selfies, and (for a fee) sexting and segueing into being an online girlfriend / boyfriend. Will most people prefer the warm glow of a phone screen to an actual human? I don’t think so. But enough will to, I suspect, cause a lot of problems. Because while on the one hand it seems fine if under-socialized humans can find some affection with a robot, I question whether directing an under-socialized person’s already-limited social skills to a robot designed to always be amenable and positive and accommodating and compliant really serves that person, or anyone who has to be around that person. Interacting with other human beings can be challenging. That’s part of the point: It’s how we learn to regulate our own emotions and consider those of others; it’s how we start to discern which people are our people; it’s how we learn to compromise and negotiate and build layered and meaningful relationships. You can’t get that from AI. But what you can get is a facsimile of a human being who more or less does what will make you happy in the moment—which, again, is not at all a recipe to be happy in the aggregate.Ms. Magazine, 5d ago
An "unlearning" technique consists in deleting the data used to train a model, such as images, in order to preserve their confidentiality. This technique can be used, for example, to protect the sovereignty of an algorithm in the event of its export, theft or loss. Take the example of a drone equipped with AI: it must be able to recognize any enemy aircraft as a potential threat; on the other hand, the model of the aircraft from its own army would have to be learned to be identified as friendly, and then would have to be erased by a technique known as unlearning. In this way, even if the drone were to be stolen or lost, the sensitive aircraft data contained in the AI model could not be extracted for malicious purposes. However, the Friendly Hackers team from Thales managed to re-identify the data that was supposed to have been erased from the model, thereby overriding the unlearning process. Exercises like this help to assess the vulnerability of training data and trained models, which are valuable tools and can deliver outstanding performance but also represent new attack vectors for the armed forces. An attack on training data or trained models could have catastrophic consequences in a military context, where this type of information could give an adversary the upper hand. Risks include model theft, theft of the data used to recognise military hardware or other features in a theatre of operations, and injection of malware and backdoors to impair the operation of the system using the AI. While AI in general, and generative AI in particular, offers significant operational benefits and provides military personnel with intensively trained decision support tools to reduce their cognitive burden, the national defence community needs to address new threats to this technology as a matter of priority.tmcnet.com, 11d ago
To be clear, AI safety is a really important field, and, were it to be actually practiced by corporate America, that would be one thing. That said, the version of it that existed at OpenAI—arguably one of the companies that has done the most to pursue a “safety” oriented model—doesn’t seem to have been much of a match for the realpolitik machinations of the tech industry. In even more frank terms, the folks who were supposed to be defending us from runaway AI (i.e., the board members)—the ones who were ordained with responsible stewardship over this powerful technology—don’t seem to have known what they were doing. They don’t seem to have understood that Sam had all the industry connections, the friends in high places, was well-liked, and that moving against him in a world where that kind of social capital is everything amounted to career suicide. If you come at the king, you best not miss.Gizmodo, 11d ago
But what about genetics? Will these tests be used to target citizens by a “precrime” bureau? U.S. laws do not protect the genetic privacy of its citizens. And while we may believe that HIPAA or the Fourth Amendment protects medical information, it absolutely does not. In fact, HIPAA’s law enforcement exception makes all of our medical records available to the authorities. In the pre-HIPAA days, records were on paper and locked in an office. The police had to get a specific warrant to access them. Not long ago, the DEA was found to be trolling through thousands of cloud-based medical records, fishing for targets. How many patients will be willing to tell their physician about their drug problem when they know that federal agents will be reading through their charts? Worse yet, the government has developed algorithms and AI-based platforms to analyze medical and prescription records to find patients and physicians to target and prosecute.KevinMD.com, 7d ago

Here you go, insider trading robot: We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision. We perform a brief investigation of how this behavior varies under changes to the setting, such as removing model access to a reasoning scratchpad, attempting to prevent the misaligned behavior by changing system instructions, changing the amount of pressure the model is under, varying the perceived risk of getting caught, and making other simple changes to the environment. To our knowledge, this is the first demonstration of Large Language Models trained to be helpful, harmless, and honest, strategically deceiving their users in a realistic situation without direct instructions or training for deception.John Lothian News, 4d ago
A year later and I'm not obsolete, and few journalists like me have lost their jobs as a direct result of ChatGPT or AI in general (though publishers have tried and failed, spectacularly in some cases, to use it to replace writers). I'm not arguing that some in media haven't lost their jobs or been deprived of work as a result of the rise of ChatGPT and its imitators, but direct cause and effect on a significant scale is not there yet.TechRadar, 4d ago
In these ways, the grey market augments an investor’s view of new bond deals coming to market. Yet, the data need to be considered in parallel with deal data coming directly from syndicates via official sources. Bringing grey market data into the decision-making process can be complicated because grey market activity takes place on informal channels such as Instant Bloomberg (IB) Chat or Liquidnet’s data stream for new issue trading. In the case of chats, the data are unstructured and, thus, a challenge to aggregate, normalize and otherwise format to present alongside more traditional sources of deal data. Fortunately, AI and related advanced data parsing techniques can come into play to efficiently present grey market data alongside structured data from formal sources.CoinGenius, 4d ago
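As a rough illustration of the parsing challenge described above, the sketch below pulls indicative quotes out of free-text chat lines so they can be laid alongside structured syndicate data. The message format, field names, and regular expression are invented for the example; real IB Chat traffic is far less regular, which is where AI-based parsing comes in.

```python
# Hypothetical sketch: turning grey-market chatter into structured records.
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class GreyMarketQuote:
    issuer: str
    bid: Optional[float]    # indicative level, e.g. +0.125
    offer: Optional[float]

# Toy pattern for lines like "ACME 29s grey +0.125/+0.375" (the format is an assumption).
QUOTE_RE = re.compile(
    r"(?P<issuer>[A-Z][\w&.]+)\s+\d{2}s\s+grey\s+(?P<bid>[+-][\d.]+)?/?(?P<offer>[+-][\d.]+)?",
    re.IGNORECASE,
)

def parse_chat_line(line: str) -> Optional[GreyMarketQuote]:
    m = QUOTE_RE.search(line)
    if not m:
        return None  # Unparseable noise; a production system might hand this to an LLM instead.
    to_float = lambda s: float(s) if s else None
    return GreyMarketQuote(m.group("issuer"), to_float(m.group("bid")), to_float(m.group("offer")))

print(parse_chat_line("ACME 29s grey +0.125/+0.375"))
```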
The real problem is that consumers themselves are the ones on the hook. Legislation can keep open-source LLMs in check because their models grow from publicly available data, but won't have the reach to regulate ones whose growth depends on data collected privately, especially as the technology balloons across the industry. Whenever a customer uses an open-source LLM, their search history, in-app behavior, and identifying details are logged in service of further educating the AI. That quid pro quo isn't obvious to users, and this means best practices have to be created and enforced by the vendors themselves — a questionable proposition, at best.diginomica, 4d ago
...• Create: Corporate culture shifts. A company’s culture plays a crucial role in determining its success with GenAI. Companies that struggle with innovation and change will find it tough to keep pace. Does your company have a learning culture? That could be the key to success. Does your company foster a shared sense of responsibility and accountability? Without this shared sense, it is more likely to run afoul of the ethical risks associated with AI. Both questions involve cultural issues that boards should consider prompting their management teams to examine.DATAQUEST, 4d ago
By the end of 2024, our digital lives will be transformed, starting with our daily communications. Every individual at home or work will have an AI agent to manage their emails. Initially, it will filter, highlight, and summarize messages. As the agent analyzes our history, understands context, and incorporates our feedback, we will, over time, trust it to respond on our behalf using a defined set of guidelines and rules. When you want to really get someone’s attention, you may indicate this is an organic artisan email, written by me, a human!...Fast Company, 4d ago
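As a toy illustration of the "defined set of guidelines and rules" stage, the sketch below triages incoming mail and only permits an automatic reply when an explicit rule allows it; the rule set, message fields, and action names are assumptions made up for the example, not a description of any existing product.

```python
# Hypothetical sketch: rule-gated email triage before an AI agent drafts any reply.
from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str
    body: str

# Guidelines the agent must respect before acting on the user's behalf.
RULES = {
    "auto_reply_allowed_senders": {"calendar@corp.example", "noreply@tickets.example"},
    "always_escalate_keywords": {"urgent", "legal", "contract"},
}

def triage(email: Email) -> str:
    text = f"{email.subject} {email.body}".lower()
    if any(keyword in text for keyword in RULES["always_escalate_keywords"]):
        return "highlight_for_human"   # Never answered automatically.
    if email.sender in RULES["auto_reply_allowed_senders"]:
        return "draft_auto_reply"      # An LLM call would draft the response at this point.
    return "summarize_only"            # Default: summarise and queue for review.

print(triage(Email("calendar@corp.example", "Meeting moved", "Now at 3pm")))
```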

...(Which, for instance, seems true about humans, at least in some cases: If humans had the computational capacity, they would lie a lot more and calculate personal advantage a lot more. But since those are both computationally expensive, and therefore can be caught-out by other humans, the heuristic / value of "actually care about your friends", is competitive with "always be calculating your personal advantage." I expect this sort of thing to be less common with AI systems that can have much bigger "cranial capacity". But then again, I guess that at whatever level of brain size, there will be some problems for which it's too inefficient to do them the "proper" way, and for which comparatively simple heuristics / values work better. But maybe at high enough cognitive capability, you just have a flexible, fully-general process for evaluating the exact right level of approximation for solving any given problem, and the binary distinction between doing things the "proper" way and using comparatively simpler heuristics goes away. You just use whatever level of cognition makes sense in any given micro-situation.)...lesswrong.com, 16d ago
Policymakers should promote a liability regime based on the idea of algorithmic accountability—the principle that operators should employ a variety of controls to ensure they can verify an algorithm works in accordance with their intentions and they can identify and rectify harmful outcomes.[20] In practice, algorithmic accountability requires regulators to consider factors such as whether an operator acted with negligence or malicious intent and whether an operator took reasonable steps to minimize foreseeable risks from the use of its AI system. Operators who cause harm through negligence, such as by failing to consider the potential risks of their AI systems, should face more severe penalties than those acting responsibly and in good faith.[21] An effective liability regime would incentivize AI operators to protect consumers from harm while also giving them the flexibility to manage their risk based on their knowledge of their products.itif.org, 26d ago
Of course, nobody has yet laid out specifically what financial information is actually deemed to be decision-useful in order to establish which data must be converted to a new format; that’s an unresolved first-order problem that leaves open the risk of unwarranted burdens on local governments during the initial implementation phase. All the oversight boards need to tread carefully. What proponents of the FDTA had in mind when they lobbied Congress, standardization on the extensible financial reporting language platform that has become commonplace in the private sector, was only a first-stage rocket in this new space race. The federal legislation did not give XBRL a monopoly per se, specifying only the use of “structured” data formats. Clearly, what most parties in the legislative process could never have anticipated last year was the possibility that existing financial reports using generally accepted accounting terminology may themselves already be computer-readable because of the new large language machine learning models that can read plain English typeset produced by word-processing software as well as alphanumeric images contained in the commonly used PDF documents that typically encapsulate governments’ audited annual financial reports. All of a sudden, “structured data” may ultimately prove to be little more than what we already have in place with conventional text documents that can be ingested by new AI systems with superior analytics already integrated with database utilities, without costly data entry hurdles.Governing, 20d ago

Working with AWS builds on an existing relationship for Humantic AI. “Our obsession with the customer is mutual. There is nothing else that we believe should come first. That’s why Humantic AI prioritizes the buyer experience… where other sales tools are focused on seller productivity,” said Amarpreet Kalkat, Founder and CEO of Humantic AI. “In this coming age of AI, we have a choice to make between becoming robotic and soulless, or selling with a soul. Technology can be a force-multiplier; but the human element remains key.”...TecHR, 4d ago
...“Our board considers the safety of students, faculty and staff to be our number one priority, and ZeroEyes is a critical addition to our security technologies,” said James Reina, superintendent of Greater Egg Harbor Regional School District. “AI, in combination with the 24-hour ZeroEyes Operations Center, is always monitoring our camera feeds for possible images of weapons, which removes the need for someone to be solely focused on the security cameras every time there are people in the building. We consider relationship building between our security guards, armed SROs and students to be critical to security, because if the kids know and trust us, they’ll tell us if anybody is threatening to bring a gun to school. If an SRO is locked in a room all day watching cameras, they’re not creating relationships with the community.”...securitysystemsnews.com, 4d ago
In an era where algorithms govern everything from the content we see online to the loans we apply for, the concept of “neutral“ decision-making is a myth. Moreover, algorithmic bias, an insidious force often lurking beneath our digital lives, shapes our opportunities, defines our interactions, and even molds our perceptions. Consequently, it is a phenomenon that demands our attention, for its impact is far-reaching, touching the lives of individuals and communities in ways we might not even realize.Emeritus Online Courses, 4d ago
Whether or not these strategies work for the AI companies, perhaps all they care about at the moment is owning full rights to their future generated content and hiring as many creative writers as they can, so that they do not end up being sued for copyright infringement by large numbers of authors.POP!, 4d ago
Perhaps most importantly, leaders and educators need to resist the temptation to become overly focused on—or even panicked about—how AI might change teaching and learning. The dawn of ubiquitous AI should serve as a reminder that children still need to develop a deep foundation of knowledge to use these tools well, and that the best use of AI in traditional schools is to free up the time of educators to do more work directly with students. Outside of schools, AI can help cultivate the “weirder” ecosystem of educational options needed for a system of education that empowers families to access the educational opportunities their children need to thrive. When used thoughtfully, AI tools have the potential to move us closer to an education system that provides a more diverse range of experiences to meet the unique needs of every student.The Thomas B. Fordham Institute, 4d ago
Trust is deeply relational (Scheman 2020, Knudsen et al. 2021, Baier 1986), and has been understood in terms of the vulnerabilities inherent in relationships (Mayer et al. 1995). Yet discussions about trust in AI systems often reveal a lack of understanding of the communities whose lives they touch — their particular vulnerabilities, and the power imbalances that further entrench them. Some populations are expected to simply put their trust in large AI systems. Yet those systems only need to prove themselves useful to the institutions deploying them, not trustworthy to the people enmeshed in their decisions (Angwin et al. 2016, O’Neill 2018; Ostherr et al. 2017). At the same time, researchers often stop at asking whether we can trust algorithms, instead of extending the question of trust to the institutions feeding data into or deploying these algorithms.Data & Society, 4d ago

...“The fight for control of OpenAI provides a valuable reminder of the volatility within this relatively immature branch of the digital industry and the danger that crucial decisions about how to safeguard artificial intelligence systems may be influenced by corporate power struggles. Huge amounts of money – and huge egos – are in play. Judgments about when unpredictable AI systems are safe to be released to the public should not be governed by these factors,” he said.the Guardian, 10d ago
Finally, civil rights are treated expansively, and the order sets out numerous requirements for relevant agencies to ensure that AI systems do not discriminate. The order also puts in motion exploration of future legislation that could be used to protect people from discrimination in areas including housing, credit, health care, criminal sentencing and government benefit programs.Centre for International Governance Innovation, 26d ago
Machine learning also promises to supercharge the cryptocracy’s responsibility-dodging imperative. Rather than passing the buck to other human beings, managerialists will simply be able to say that they are only following the suggestions emerging from the AI’s inscrutable layers of digital neurons; clearly, the AI cannot itself be responsible in any meaningful sense; and responsibility for its programming (and whatever goes wrong with it) is spread so thin between the teams of data scientists who curated its training data and supervised its training that none of them can be held responsible, either. A machine that programs itself, and whose inner workings are utterly illegible, is the ultimate in eliminating responsibility.Brownstone Institute, 15d ago
Protect intellectual property (IP): While the EO does not completely ignore the issue of protecting IP, it does not offer much peace of mind to those who know how this typically gets done. The government states the obvious: They will deal with IP theft in a manner consistent and applicable to current methods. The problem? We can’t even do that effectively now. The FBI estimates that China steals between $225 billion and $600 billion of American IP annually, and it only takes a simple internet search to uncover the number of arrests and cases the FBI has made concerning espionage by Chinese nationals in the United States under a visa program. This has been a persistent issue for American businesses, and the United States has made it clear that they are concerned with the implementation of IP law in China. We can monitor their progress in addressing IP issues (which has slowed recently) all we want. Meanwhile, American companies are losing billions to a competitive market. Foul play will cause America to fall behind in the AI arms race. To protect and better manage IP, we need to raise the barriers to entry from countries identified on the United States Trade Representative Priority Watch List, and combine that with enhanced investigations and penalties for espionage committed by nationals from countries on the list.SC Media, 25d ago
China’s Interim Measures for the Management of Generative Artificial Intelligence Services indicate that providers of generative AI services must carry out the training of the models in accordance with law, and must not infringe the intellectual property rights of third parties (Article 7; a translation is available at https://www.chinalawtranslate.com/en/comparison-chart-of-current-vs-draft-rules-for-generative-ai/). Further guidance on how to avoid IP infringement is given in the consultation draft on Basic Security Requirements for Generative Artificial Intelligence Service (published in October 2023; as reported at https://www.reuters.com/technology/china-proposes-blacklist-sources-used-train-generative-ai-models-2023-10-12/), to the extent that parties should have an IP management strategy, scrutinize all training data carefully and avoid using materials with a “problem” with their copyright for training. However, no position is taken on the very same questions before the American courts (such as whether the use of copyrighted materials for training is “fair use”). (This is the author’s own assessment after running the draft rules through Google Translate; the draft rules, only available in Chinese, can be accessed at https://www.tc260.org.cn/upload/2023-10-11/1697008495851003865.pdf.)...The Singapore Law Gazette, 10d ago
Newton-Rex hopes that courts and lawmakers will help establish and enshrine the rights of creators in the post-generative AI world. “If the rights of creators are not upheld, I think it’s going to be a very rapid and detrimental thing for the creative industries,” he says. “And for individual creators who’ve ultimately been brought up believing and being told that their copyright might be worth something.” He would rather see an agreement between Silicon Valley firms and creators broached—perhaps with the nudge of regulation—that agrees on “how these technologies can be win-win, at least in the near term.”...Fast Company, 13d ago

Enhanced Security: As the metaverse becomes more and more popular, more people will utilize it. As more people transact in digital currencies to buy products from companies, there will also be a greater chance of fraud, identity theft, and attacks by harmful software. To protect users and the platform from any threats to their safety, AI will recognize any harmful behavior and take appropriate action to remove it.CoinGenius, 4d ago
While deploying AI solutions, it is important to proceed with caution given their evolving learning capabilities. The CAR strongly believes that there needs to be national oversight for AI applications, particularly as they relate to patient care. Best practices, rules, and regulations are necessary and should be monitored at a federal level. Despite the numerous benefits that AI could bring to patients and healthcare professionals, we must also remain cognizant of the potential risks associated with its application.Hospital News, 4d ago
How the principle of democratic participation is applied is crucial in this context. For humans to maintain control over AI, a certain level of transparency is required, and the public's use of the technology should be limited. Application programming interfaces (APIs) that allow programs like ChatGPT to be used by a third-party application (such as mental-health counseling apps) should be placed under strict control.techxplore.com, 4d ago

Generative AI models are good at emulating a wide range of personas and sticking to them. With the right prompting techniques, the focus or behavior of the model can be steered toward a particular bias. From there, a model can evaluate a wide variety of risk scenarios by emulating various personas, providing insight from diverse perspectives. By drawing on a range of perspectives, generative AI can be leveraged to produce comprehensive risk assessments, and it is far more capable of acting as a neutral evaluator (via persona emulation) than a human would be. One can debate a model with an opposing persona and ensure that the scenarios being evaluated are thoroughly red-teamed.The Cyber Security News, 4d ago
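To make the persona approach concrete, here is a minimal sketch of how several personas might be run against the same risk scenario and their answers collected for comparison. The ask_llm function is a placeholder for whatever chat-completion API is in use, and the personas and scenario are illustrative assumptions.

```python
# Hypothetical sketch: persona emulation for red-teaming a single risk scenario.

PERSONAS = {
    "chief_risk_officer": "You are a conservative chief risk officer focused on regulatory exposure.",
    "adversarial_attacker": "You are a creative attacker probing for ways this plan could be abused.",
    "end_user_advocate": "You represent end users and care about fairness and transparency.",
}

def ask_llm(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for a real chat-completion call; swap in the API of your choice."""
    raise NotImplementedError

def assess_scenario(scenario: str) -> dict:
    # One assessment per persona, so opposing viewpoints can be compared side by side.
    question = f"Assess the risks in the following scenario and list mitigations:\n{scenario}"
    return {name: ask_llm(persona, question) for name, persona in PERSONAS.items()}

# Usage: reviews = assess_scenario("Roll out an AI chatbot that can approve customer refunds.")
```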
On September 19, 2023, the Director of the Federal Trade Commission Bureau of Consumer Protection, Samuel Levine, delivered remarks that provided insight into the FTC’s ongoing strategy for regulating artificial intelligence (“AI”) during the National Advertising Division’s annual conference. Levine emphasized that the FTC is taking a more proactive approach to protect consumers from the harmful uses of AI, while ensuring the market remains fair, open, and competitive. Levine expressed the belief that self-regulation is not sufficient to address the regulation of AI. Levine also asserted that the FTC would continue to use its enforcement authority to challenge unfair or deceptive practices related to emerging AI products and push to expand its existing toolkit through proposed rules, such as imposing fines against those who use voice-cloning to defraud consumers. In his speech, he stated in person, “I would say, at this stage, that we’re monitoring the market closely. I think the bigger thing we’re seeing now is claims around the use of AI. When we see more actual use of AI in direct interaction with consumers, we’ll be monitoring that closely to ensure that they’re not being deceived and that does not lead to harm otherwise.”...natlawreview.com, 4d ago
It is likely that challenges to the roll-out of AI powered systems will increase, especially from sectors that have tended to be unaffected by automation or other disruptive technologies in the past, such as script and song writers, along with copyright and IP infringement claims.electronicspecifier.com, 4d ago
Lead co-author professor Carl Frey, Dieter Schwarz associate professor of AI & Work at the Oxford Internet Institute and Director of the Oxford Martin Programme on the Future of Work, said, “The computer revolution and the rise of the Internet has connected talent from all around the world yet, rather than accelerating as many predicted, studies have shown that breakthrough innovation is in decline. Our paper provides an explanation for why this happens: while remote collaboration via the internet can bring together diverse pools of talent, it also makes it harder to fuse their ideas. Today, there is much talk about Artificial Intelligence supercharging innovation. Yet many predicted the same with the advent of the PC and the Internet. This should serve as a reminder there is unlikely to be a pure technological solution to our innovation problems.”...Lab Manager, 4d ago
It’s easy to see the future of AI through a dystopic lens. But with huge advances happening in AI, we are hurtling towards a reality where advertising need no longer be such a blunt tool based on audience assumptions, but instead a better optimised system that can not only understand, but also pre-empt the needs, wants, and desires of people. Businesses that capitalise on this opportunity whilst interrogating its limitations will undoubtedly reap rewards. But this will also cause huge ethical dilemmas for businesses as they tackle bias, misinformation, copyright, and privacy issues. In 2024, businesses must look at their own internal policies and ways of working around AI – they must set internal rules and responsibilities for AI use, perhaps establishing a dedicated AI ethics team, define the role of AI within the business, educate employees and partners, and maintain transparency wherever possible.advertisingweek.com, 4d ago
Until AI advances to the point where it can actually think for itself, understand, and exhibit something that more closely resembles human-like intelligence and common sense, it will remain a tool that can be used for good or bad, depending on the intentions of its human users or the unintended consequences of its design.RTInsights, 4d ago

...“XDR is one of the best cybersecurity investments available today,” continued Shaju. “It offers improved, consolidated visibility by ingesting data from siloed security solutions. It offers automated analysis that yields insights that would be unlikely to emerge from manual processes. The security function is therefore empowered to carry out faster, more productive investigations because the platform has already prioritised avenues of inquiry. Here we can see an end to alert fatigue and the beginning of a new era of high morale in the SOC and less risk across the board. And if that is not enough to get security leaders thinking in a new direction, imagine making a business case for XDR in which you can say with confidence that the entire security stack, now consolidated and simplified, will have a lower total cost of ownership.Intelligent CISO, 4d ago
A future in which all books, digitally unified, “become one massive tome,” as Baldwin puts it, sounds like the dystopias of Jorge Luis Borges’s “The Book of Sand” and “The Library of Babel.” In fact, it seems so extreme that Baldwin may dangle it in front of the reader as a provocation, intended merely to expose flaws in the current system. But he fails to consider a main objection to his argument: Is a “global bulletin board” even feasible, and if so, could improved search engines compensate for the elimination of editors, reviewers, librarians, and others who dominate the current process of selection? He does not address the problems of eliminating firewalls in countries like China, nor does he deal with the inadequacies of existing search engines. He barely mentions artificial intelligence, which could flood cyberspace with more misinformation than could ever be overcome by search engines of the future.The New York Review of Books, 4d ago
Human Oversight and Intervention: While AI can greatly enhance cybersecurity efforts, it's not a substitute for human expertise. Security teams should maintain an active role in monitoring and validating the decisions made by AI systems, as human intervention is essential in complex or novel situations. Additionally, security leaders must educate their teams on how to effectively use and understand AI systems to ensure that they are deployed correctly and that security teams can leverage the insights generated by these systems to make well-informed decisions.TechRadar, 5d ago
Due to the current situation, the Commission presented a possible compromise text on November 19, 2023. Although it maintained Parliament’s tiered approach, it also significantly softened the regulation. Firstly, the term “foundation model” no longer appears in the text. Instead, the Commission distinguishes between “general-purpose AI models” and “general-purpose AI systems” – according to the Commission’s definition, however, these terms continue to correspond to the terms “foundation model” and “general-purpose AI” introduced by the Parliament. According to the proposal, providers of general-purpose AI models should, among other things, be obliged to document the functionality of their AI models by means of so-called “model cards.” If the AI model poses a systemic risk – which should initially be measured in terms of computing power – they are subject to additional monitoring obligations. The text also contains an article according to which the Commission is to draw up non-binding codes of practice. This refers to practical guidelines, for example, on the implementation of model cards, on the basis of which players can ensure their compliance with the AI Act. However, possible sanctions are not mentioned.Tech Policy Press, 5d ago
...“However, while generative AI-powered LLMs are making life easier in numerous ways, we need to be acutely aware of their limitations. For a start, they’re not accurate: GPT-4 Turbo has the most up-to-date data since its inception, but still only contains world knowledge up to April 2023. These systems also hallucinate and have a clear tendency to deliver biased responses. The real concern with ChatGPT is the way these LLMs are presented. They give a ‘human-like’ interaction which inclines us to trust them more than we should. To stay safe navigating these models, we need to be much more skeptical with the data we are given. Employees need in-depth training to keep them up to date with the security risks posed by generative AI and also what its limitations are.”...TechRound, 5d ago

Personalization will be essential. This is not “Dear {{lead.First Name:default= }}” or even “I saw you went to school at Duke” personalization; it’s about being relevant to a recipient’s actual interests. We’ll also see an increased use of the human touch in email marketing. Emails from real people have higher open rates, especially when they come authentically from people the recipient knows or executives at your company. And as buyers become sensitized to poorly customized, AI-written emails, we will see the pendulum swing towards genuine human interactions.OpenView, 4d ago
These current conversations may serve as a conduit for cultivating ‘topic trust’ between nations in conflict. Perhaps the U.S. and China can agree that AI is a powerful tool that, if not utilized properly, could have serious consequences. AI stands as a potent instrument capable of faster data processing, augmenting educational experiences, and disseminating information. However, similar to numerous technological innovations, this power is accompanied by the capacity to instigate fear, obscurity, and disinformation. Consequently, a pertinent question arises: Can the United States and China effectively harness such technological prowess without precipitating mutual destruction?...Modern Diplomacy, 4d ago
The project, run by the BIS Innovation Hub’s Swiss centre working alongside the Swiss National Bank (SNB), was launched just over 12 months ago as an exploration in how to ‘improve’ CBDC privacy, cyber-resiliency and scalability through prototype development. Designing a CBDC involves ‘complex’ trade-offs between these three elements, BIS stated, explaining that – for example – privacy needs to be balanced against the need to counter money laundering, terrorism financing and other illicit payments; and that higher resiliency against cyber-attacks, especially from quantum computers, requires additional cryptography, which can slow payment processing. BIS Innovation Hub Swiss centre head Morten Bech said the project would ‘push central banks’ technological frontier’. He and colleagues’ subsequent work, which has also involved IBM as private-sector technology partner, sought to address the three features simultaneously: privacy – by enabling payer anonymity; cyber-security – by implementing ‘quantum-safe’ cryptography; and scalability – by testing the prototype’s ability to handle a growing number of transactions using payment data.Global Government Fintech, 4d ago

Innovative problem-solving. While it has its flaws, the non-sentient AI we already use has been known to come up with creative solutions to human problems. It is now common for humans to ask ChatGPT for advice for everything from career development to relationship problems. Now, keep in mind that a sentient AI would be the most self-aware tech to ever exist. Not only would it have access to virtually all the information that has ever been present and analyze it at the drop of a hat, but it would also understand first-hand how human feelings work. It’s been suggested that sentient AI could tackle issues like world hunger and poverty because it would have both the technical understanding of such problems and the emotional intelligence needed to navigate human nuances.Coinspeaker, 14d ago
Li told me that the AI’s role really is designed to be “assistive” rather than to replace lawyers themselves. But, still, this stands in stark contrast to the United States where regulations in most states explicitly limit the amount of help that technology can provide in the legal system. Even legal aid companies in the U.S. are met with swift accusations of unauthorized practice of law when they try to help provide support to those who would be unrepresented (for reference, as much as 75% of civil cases in the U.S. have at least one party representing themselves). Or where companies that claim they are the world’s first Robot Lawyer get class actions lawsuits accusing them of being neither robots nor lawyers — and get smacked with unauthorized practice of law accusations for offering AI-powered airpods to guide people in traffic court.Above the Law, 20d ago
While recognizing and rewarding the significant investments by the major players, it will be important for regulators to foster an environment where access to models is provided on fair, reasonable and non-discriminatory terms. The biggest players have their own downstream offerings, as well as strategic investments and partnerships with other AI developers. These arrangements can bring efficiencies, new capabilities and enhanced consumer choice, but also provide opportunities to foreclose downstream competitors or lock in the supply of services to downstream operators on terms weighted in favor of the bigger player.TechRadar, 10d ago
A technology monitoring system. A global technology observatory has already been suggested by the Millennium Project[7], the Carnegie Council for Ethics in International Affairs[8] and other experts. This organization should be empowered to supervise AI research, evaluate high-risk AI systems, and grant ISO-like certifications to AI systems that comply with standards. It should track technological progress and employ foresight methods to anticipate future challenges, particularly as AGI looms on the horizon. Such an entity, perhaps aptly named the International Science and Technology Organization (ISTO), building on the work done by ISO/IEC and the IEEE on ad hoc standards, could eventually extend its purview beyond AI, encapsulating fields like synthetic biology and cognitive science. Avoiding the usual challenges (dissent over nuances, apprehensions of national sovereignty, and the intricate dance of geopolitics) could be done through the emergence of such an organism from the mentioned already extant standardization organizations. However, the EU, with its rich legacy, is perfectly poised to champion this cause, in close collaboration with the UN to expedite its realization.Modern Diplomacy, 20d ago
Artificial Intelligence (AI) presents enormous global opportunities: it has the potential to transform and enhance human wellbeing, peace and prosperity. To realise this, we affirm that, for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible. We welcome the international community’s efforts so far to cooperate on AI to promote inclusive economic growth, sustainable development and innovation, to protect human rights and fundamental freedoms, and to foster public trust and confidence in AI systems to fully realise their potential.lesswrong.com, 26d ago
As the Ai Pin and similar devices gain traction with consumers, they could pave the way for more innovations in remote collaboration and productivity. However, there are issues raised related to privacy, data security, and the social acceptability of wearing such devices in professional settings, and these will need to be addressed by workplaces that decide to incorporate the device.Allwork.Space, 23d ago

If the recent tussle over the firing and re-hiring of OpenAI CEO Sam Altman has taught us anything, it’s that there will always be an inherent tension between what AI promises – its inherent value – and the risks of a technology potentially “too disruptive for its own good”. Over the next year we can expect more OpenAI style fall-outs involving native AI companies, regulators, governments and other stakeholders as society grapples with the implications of what is essentially a live experiment. Companies engaging at any level with generative AI should expect and even encourage debate around how this dichotomy can be handled.Compare the Cloud, 4d ago
In Beltran de Heredia's opinion, the field in which we are most likely to see the first attempts to influence human behaviour through AI is that of work, more specifically occupational health. He argues that a number of intrusive technologies are currently in use. These include devices that monitor bus drivers to detect microsleep or electroencephalography (EEG) sensors used by employers to monitor employees' brainwaves for stress and attention levels while at work. "It's hard to predict the future but, if we don't restrict such intrusive technologies while they're still at the earliest stages of development, the most likely scenario is that they'll keep improving and spreading their tendrils in the name of productivity."...newswise.com, 5d ago
Generative AI chat applications have captured the public’s imagination and helped people understand what is possible, but there are still barriers that prevent people from using these solutions at work. Specifically, these chat applications do not know an organisation’s business, data, customers, operations, or employees. Additionally, these solutions were not initially built with the security and privacy features that organisations need for employees to safely use them in their day-to-day work. This has led to companies adding these features to their assistants after they were built, which does not work as well as incorporating security into the assistant’s fundamental design.technologymagazine.com, 5d ago
Consent, privacy, and responsible AI use are only a few of the concerns that should be included in ethical considerations. Developers and researchers ought to follow guidelines that put the rights and welfare of the people whose likenesses are being replicated first. Collaborating across the industry can help exchange best practices and forge a shared commitment to the responsible development and application of deepfake technology.MarTech Series, 5d ago
Establish specifications: Building on their objectives, kids must know how to control the parameters that govern their interactions with AI tools. Similar to the laws of a game like chess, parameters are precise instructions or rules that specify how the AI should react. Give your child a chessboard and don’t explain to them how the pieces move, and they will probably become frustrated and move the pieces randomly. The same is true when utilising AI models like ChatGPT; in the absence of explicit parameters, the results may be erroneous and stochastic. Regardless of how the world changes, teaching kids this ability will make them more productive users of technology and promote disciplined thinking, which is a need that will always be important.CXOToday.com, 5d ago
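One concrete way to show what such parameters look like in practice is to contrast an unconstrained request with one that spells out the rules the model must follow; the prompts below are illustrative assumptions only.

```python
# Illustrative only: the same request with and without explicit parameters.
vague_prompt = "Write something about volcanoes."

parameterised_prompt = (
    "Write about volcanoes using these parameters:\n"
    "- Audience: 10-year-olds\n"
    "- Length: exactly three short paragraphs\n"
    "- Must include: one real example of an eruption\n"
    "- Tone: curious and encouraging, no frightening imagery\n"
)
```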
The AWS CEO added that the new AI assistant was also designed to understand "rock-solid security and privacy," which were critical in determining permissions to access data that vary across multiple roles and hierarchies within companies. "If a user doesn't have permission to access something without Q, they cannot access it with Q either," said Selipsky, adding that he truly believed this is going to be transformative. "We want lots of different kinds of people who do different kinds of work to benefit from Amazon Q," he said.cnbctv18.com, 5d ago

It's been interesting that the focus, how it's been stated, is on sort of broad AI models as agents, as tools, but it's perhaps unsurprising that even though large language models weren't called out specifically in the remit, actually that's where the attention has been. I think that the areas which have been particularly challenging are around control, if that's the right word, of data, in terms of knowing that these models, unlike more narrow applications of AI in medicine, have been generated on enormous data sets that aren't on a traditional consent, opt-in type model. There's been really interesting conversations, thoughtful analysis on what this means for patient autonomy. What does the right to be forgotten, if we look at this in a more sort of GDPR context, what does that look like in the context of a large language model? I think also what we'll be trying to hold on to is the opportunity to improve patient care. And actually a really strong theme has been around patients as the leaders. This is not something that's being done to patients or on behalf of patients, this is all of us together as that wider public, as humanity, with different roles, some as patients, some as carers, some as health practitioners, some as engineers, trying to find a way together.The Health Foundation, 7d ago
GPT-4 was tested using a public Turing test on the internet by a group of researchers from UCSD. The best-performing GPT-4 prompt succeeded in 41% of games, better than the baselines set by ELIZA (27%) and GPT-3.5 (14%), but short of the 63% human baseline, so it does not quite get there. The results of the Turing Test showed that participants judged primarily on language style (35% of the total) and social-emotional qualities (27%). Neither participants’ education nor their prior experience with LLMs predicted their ability to spot the deceit, demonstrating that even persons who are well-versed in such matters may be vulnerable to trickery. While the Turing Test has been widely criticized for its shortcomings as a measure of intellect, the two researchers from the University of California, San Diego maintain that it remains useful as a gauge of spontaneous communication and deceit. Artificial intelligence models that can pass as humans might have far-reaching social effects, so they examine the efficacy of various methodologies and criteria for determining human likeness.MarkTechPost, 24d ago
Third, everyone who has any role in the deployment of AI needs to be thinking about the ethical and even moral implications of the technology. Profit alone cannot be the only factor we optimize our companies for, or we’re going to create a lot of misery in the world that will, without question, end in bloodshed. That’s been the tale of history for millennia – make people miserable enough, and eventually they rise up against those in power. How do you do this? One of the first lessons you learn when you start a business is to do things that don’t scale. Do things that surprise and delight customers, do things that make plenty of human sense but not necessarily business sense. As your business grows, you do less and less of that because you’re stretched for time and resources. Well, if AI frees up a whole bunch of people and increases your profits, guess what you can do? That’s right – keep the humans around and have them do more of those things that don’t scale.Christopher S. Penn - Marketing Data Science Keynote Speaker, 8d ago