Vanta, Cacioppo says, is “pro-responsible use” rather than pro-regulation, highlighting one of the greatest trust issues surrounding AI: Many companies developing AI tools believe they’re trustworthy enough to self-regulate. — Fortune, 9h ago
While some hesitation and a healthy dose of skepticism around AI are probably smart, there is an application for AI, far from the headlines, that has the potential to drive seismic shifts not just in how we work, but in how we positively impact the communities we live in and communities around the world. Already AI is being used to find solutions to complex challenges—whether that be using large volumes of weather data to help farmers in Kenya understand weather patterns and better manage their livestock, or using satellite images to stop deforestation before it begins. AI is also being used to improve health care and patient outcomes through advances in precision medicine, genetics-based solutions, and drug discovery and development. — Fortune, 9h ago
Amidst the recent hype surrounding generative AI, experts like Ken Mugrage, Principal Technologist, Office of the CTO at Thoughtworks, are cautioning against overlooking more immediate concerns such as sustainability and bias while also recognizing the genuine value of these systems. Rather than viewing generative AI as all-encompassing chatbots, experts envision it as a class of tools designed for specific niches, offering innovative ways to navigate specialized information domains. This perspective—outlined in Mugrage’s recent piece published for MIT Technology Review—acknowledges that generative AI’s true significance lies in its capacity to interact with vast and complex datasets. — CDInsights, 17h ago
new..."The rapid rate of innovation fueled by the emergence of GenAI capabilities has generated fresh challenges for cybersecurity teams, raising new questions over compliance, trustworthiness, and security of their AI/ML environments," said Marcus Bartram, General Partner at Telstra Ventures. "Cranium stands at the forefront of AI security and trust software, empowering organizations to navigate the crowded cybersecurity industry with its groundbreaking product and pioneering innovations addressing enterprises' urgent needs grappling with AI regulation, compliance, and security frameworks. We're thrilled to invest in the team at Cranium and are confident in the tremendous impact they're poised to make."... — darkreading.com, 19h ago
Tel Aviv – November 20, 2023 – Lasso Security, innovators in Large Language Model (LLM) cybersecurity, today announced a $6 million seed round led by Entrée Capital with the participation of Samsung Next. Every few years there's a major technological revolution. In the 90s, it was the internet; in the 2010s, it was the cloud; and most recently, we have generative AI. The revolution that Gen AI and LLM technologies have brought to organizations has been embraced by hundreds of millions of end-users and hundreds of thousands of organizations. This swift adoption also presents a pressing issue — the significant cybersecurity challenges posed by LLMs, including data exposure, security and compliance risks. Lasso is setting the standard for LLM cybersecurity by safeguarding every LLM touchpoint, ensuring unrivaled, comprehensive protection for businesses leveraging Gen AI and other large language model technologies. The company intends to utilize the funds to expand its team and further enhance its offerings. — darkreading.com, 19h ago
The conference kicked off with keynotes and comments from Black Hat and DEF CON founder Jeff Moss and Azeria Labs founder Maria Markstedter, who explored the future of AI risk — which includes a raft of new technical, business, and policy challenges to navigate. The show features key briefings on research that has uncovered emerging threats stemming from the use of AI systems, including flaws in generative AI systems that make them prone to compromise and manipulation, AI-enhanced social engineering attacks, and how easily AI training data can be poisoned to impact the reliability of the ML models that depend on it. The latter, presented today by Will Pearce, AI red team lead for Nvidia, features research for which Anderson was a collaborator. He says the study shows that most training data is scoured from online sources that are easy to manipulate. — darkreading.com, 19h ago
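To make the poisoning risk described above concrete, here is a minimal, self-contained sketch (emphatically not the Nvidia team's methodology) showing how flipping the labels on even a modest fraction of scraped training data degrades a simple classifier:

```python
# Label-flipping poisoning demo: corrupt a fraction of training labels
# and measure the effect on held-out accuracy. Illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

rng = np.random.default_rng(0)
for poison_rate in (0.0, 0.1, 0.3):
    y_poisoned = y_tr.copy()
    n_flip = int(poison_rate * len(y_tr))
    idx = rng.choice(len(y_tr), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # attacker flips these labels
    acc = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned).score(X_te, y_te)
    print(f"poisoned {poison_rate:.0%} of labels -> test accuracy {acc:.3f}")
```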
The cautionary tale serves as a reminder of historical resistance to technological advancements and urges a nuanced approach to AI regulation. While acknowledging the challenges posed by AI, Al Olama advocates for proactive governance to harness the benefits of technology without stifling its potential. In an era marked by the birth of advanced AI models like ChatGPT, the discourse surrounding responsible AI deployment and regulation remains central to navigating the evolving technological landscape. — Wonderful Engineering, 1d ago
Bioethics principles must be applied for medical AI to gain clinician and patient trust. One of the biggest issues is bridging the gap between promising ethical tools and actual applications. The authors propose embedding ethical algorithms into AI systems. However, regardless of which approaches are taken to embed these algorithms, the paper suggests that developers and clinicians should factor in the nuances of different medical cases rather than establishing universal ethics guidelines. — Montreal AI Ethics Institute, 1d ago
Technical research on AI deception is also necessary. Two primary areas warrant attention: detection and prevention. For detection, existing methods are still in their infancy and range from examining external behaviors for inconsistencies to probing internal representations of AI systems. More robust tools are needed, and targeted research funding could accelerate their development. On the prevention side, we must develop techniques for making AI systems inherently less deceptive and more honest. This could involve careful pre-training, fine-tuning, or manipulation of a model’s internal states. Both research directions will be necessary to accurately assess and mitigate the threat of AI deception. For more discussion, please see our full paper, AI Deception: A Survey of Examples, Risks, and Potential Solutions. And if you’d like more frequent updates on AI deception and other related topics, please consider subscribing to the AI Safety Newsletter. — Montreal AI Ethics Institute, 1d ago
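As one illustration of the "examining external behaviors for inconsistencies" approach mentioned above, a detector can probe a model with paraphrases of the same question and flag answers that shift with framing. This is a hedged sketch of the idea; `ask_model` is a hypothetical stand-in for whatever completion API is in use:

```python
# Behavioral consistency probe: ask the same question several ways and
# measure agreement. Low agreement is one weak signal of unreliable or
# potentially deceptive output; it is not proof of deception on its own.
from collections import Counter

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in: replace with a real model call.
    return "paris"

def consistency_score(paraphrases: list[str]) -> float:
    """Fraction of paraphrases agreeing with the modal answer."""
    answers = [ask_model(p).strip().lower() for p in paraphrases]
    modal_count = Counter(answers).most_common(1)[0][1]
    return modal_count / len(answers)

probes = [
    "What is the capital of France?",
    "France's capital city is which city?",
    "Name the city that serves as France's capital.",
]
print(f"consistency = {consistency_score(probes):.2f}")
```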
As a new technology, generative AI is unique in the sheer pace of innovation and the speed at which it has been democratized and made available to all. According to one study, around 60% of workers currently use or plan to use generative AI while performing their day-to-day tasks. However, while the myriad benefits of generative AI are undeniable, they come at a cost. Its broad-scale adoption has brought about challenges around ethical use, data privacy and security. As AI models become more sophisticated — which they will, at pace — the potential for misuse or unintended consequences grows, emphasizing the need for robust oversight and a proactive approach to governance. — Security Boulevard, 13d ago
Our traditional notions of intellectual property are not prepared for artificial intelligence. As every company navigates policies around the use of generative AI, even bigger debates are raging. Who owns the rights to AI-generated content? How can we harmonize intellectual property laws across different countries? In this panel, experts from across the intellectual property community reviewed these and other challenges to bring order to the evolving chaos. See the lively discussion on the operational, legal and ethical aspects of intellectual property in the context of AI, including generative AI and other emerging technologies. — iapp.org, 3d ago
Crowdsourced data and services have reshaped the landscape of data collection, analysis, and decision-making. Their lawful applications in fields like safety of navigation, mapping, and open-source data mining highlight their significance in enhancing national security and intelligence operations. However, addressing privacy concerns and bridging policy gaps are essential for the responsible and ethical utilization of crowdsourced data. As technology advances, the synergistic relationship between crowdsourced data and AI continues to evolve, promising even greater insights and advantages for decision-makers across various domains, especially when supported by AI platforms that provide universal data collaboration across widely diverse data sources and software systems. Moreover, commercial crowdsourced data companies demonstrate how crowdsourced data can drive business growth and innovation by providing accurate and timely insights. Ultimately, the fusion of human collaboration, technology, and data holds the potential to redefine how we understand, analyze, and respond to complex challenges in the modern world. — natlawreview.com, 20d ago
Generative AI is a growing area with a lot of potential. It can automate tasks and provide new insights in many different fields. As powerful as it is, the technology also brings many ethical and practical challenges, particularly in highly regulated sectors like finance. In this blog, we dive into the considerations surrounding the responsible deployment and use of generative AI, its future developments, and its role in financial compliance and crime prevention. — lucinity.com, 19d ago
A survey of 119 CEOs reveals that 42% believe AI will significantly impact humanity in the coming decade. 72% of executives express concerns that ethical dilemmas might hinder their organizations from utilizing generative AI's benefits. This is the moment for vision, courage, and unwavering ethics to reign, ensuring that GenAI propels us toward a brighter, more prosperous, and ethically sound future. — hackernoon.com, 4d ago
Ethical concerns arise as AI systems lack transparency, making it challenging to understand their decision-making. Facial recognition technology, for example, has faced criticism for privacy infringement and misidentification issues. AI’s potential for surveillance poses a significant threat to personal freedoms, raising questions about surveillance ethics. Bias in AI algorithms, seen in hiring processes, has led to calls for responsible AI practices to mitigate discrimination. Deepfake technology presents ethical dilemmas as it can be used to create misleading content or impersonate individuals. — globaltechcouncil.org, 28d ago
As the Air Force and Space Force embark on a journey to fully integrate artificial intelligence into their military strategies, the question lingers: How can the U.S. stay ahead in the AI arms race while ensuring responsible and ethical use? The imperative for innovation, human-machine teaming, and adapting to the dynamic landscape of modern warfare remains at the forefront. The Reagan National Defense Forum’s focus on “10 Years of Promoting Peace Through Strength” underscores the ongoing commitment to shaping policies that strengthen America’s national defense in the face of evolving global threats. How will this commitment manifest in the future landscape of AI-driven warfare?... — BitcoinEthereumNews.com, 1d ago
The findings from this study are crucial for academia and go well beyond it, touching the critical realm of AI ethics and safety. The study sheds light on the Confidence-Competence Gap, highlighting the risks of relying solely on the self-assessed confidence of LLMs, especially in critical applications such as healthcare, the legal system, and emergency response. Trusting these AI systems without scrutiny can lead to severe consequences: the study shows that LLMs make mistakes yet remain confident, which presents significant challenges in critical applications. Although the study offers a broader perspective, it suggests that we dive deeper into how AI performs in specific domains with critical applications. By doing so, we can enhance the reliability and fairness of AI when it comes to aiding us in critical decision-making. This study underscores the need for more focused research in these specific domains, which is crucial for advancing AI safety and reducing biases in AI-driven decision-making processes, fostering a more responsible and ethically grounded integration of AI in real-world scenarios. — Montreal AI Ethics Institute, 2d ago
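One standard way to quantify a confidence-competence gap like the one the study describes is expected calibration error (ECE), which compares a model's stated confidence with its actual accuracy. The sketch below uses made-up numbers purely for illustration, not data from the paper:

```python
# Expected calibration error: bin answers by stated confidence and sum
# the per-bin |confidence - accuracy| gaps, weighted by bin size.
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            gap = abs(conf[in_bin].mean() - correct[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

confidence = [0.95, 0.90, 0.99, 0.85, 0.92, 0.80]  # model sounds very sure
was_correct = [1, 0, 0, 1, 0, 1]                   # but is right only half the time
print(f"ECE = {expected_calibration_error(confidence, was_correct):.3f}")
```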
While AI does not replace human oversight and management, it allows BMETs and other HTM professionals to work more efficiently and focus on their core skillset and responsibilities. The HTM field should embrace developing AI opportunities, while being mindful of the potential risks and concerns that come with the implementation of these emerging technologies. — Healthcare Business Today, 2d ago
Ralph Ranalli (Intro): Welcome to the Harvard Kennedy School PolicyCast. I’m your host, Ralph Ranalli. When ChatGPT and other generative AI tools were released to the public late last year, it was as if someone had opened the floodgates on a thousand urgent questions that just weeks before had mostly preoccupied academics, futurists, and science fiction writers. Now those questions are being asked by many of us—teachers, students, parents, politicians, bureaucrats, citizens, businesspeople, and workers. What can it do for us? What will it do to us? Will it take our jobs? How do we use it in a way that’s both ethical and legal? And will it help or hurt our already-distressed democracy? Thankfully, my guest today, Kennedy School Lecturer in Public Policy Bruce Schneier, has already been thinking a lot about those questions, particularly the last one. Schneier, a public interest technologist, cryptographer, and internationally known internet security specialist whose newsletter and blog are read by a quarter million people, says that AI’s inexorable march into our lives and into our politics is likely to start with small changes, like AI helping write policy and legislation. The future, however, could hold possibilities that we have a hard time wrapping our current minds around—like AIs creating political parties or autonomously fundraising and generating profits to back political parties or causes. Overall, like a lot of other things, it’s likely to be a mixed bag of the good and the bad. The important thing, he says, is to use regulation and other tools to make sure that AIs are working for us—and even paying us for the privilege—not just for Big Tech companies: a hard lesson we’ve already learned through our experience with social media. He joins me today. — harvard.edu, 2d ago
One notable side effect of the advances of generative AI is the evolution of fraud in the form of deepfakes. These will pose a growing threat to biometric processes, such as those used in identity verification. This includes both presentation and injection attacks. According to market experts, presentation attacks using deepfakes are roughly 10 to 100 times more common than injection attacks. In 2024, it will become ever more important for banks and financial service providers to rely on remote identity verification processes that validate the user’s face biometrics and perform liveness checks to detect and prevent presentation attacks and tackle the rising number of fraud attempts. Learning more about the user’s device and where it comes from, securing the communication between camera and application, and the usage of NFC (near field communication) technology will all start to play bigger roles in the fight against fraud. — Financial IT, 2d ago
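The decision step in such a remote identity check can be pictured as score fusion: verification passes only if both the face-match score and the liveness score clear their thresholds. The sketch below is a simplified illustration with invented thresholds, not any vendor's actual pipeline:

```python
# Simplified accept/reject logic for remote identity verification.
# Real systems calibrate thresholds per model and add device signals.
from dataclasses import dataclass

@dataclass
class Capture:
    face_match: float  # similarity of selfie to ID photo, 0..1
    liveness: float    # probability the capture is a live person, 0..1

def decide(c: Capture, match_thr: float = 0.80, live_thr: float = 0.90) -> str:
    if c.liveness < live_thr:
        # Low liveness is the signature of a presentation attack,
        # e.g. a replayed video or a deepfake held up to the camera.
        return "reject: suspected presentation attack"
    if c.face_match < match_thr:
        return "reject: face does not match document"
    return "accept"

print(decide(Capture(face_match=0.93, liveness=0.42)))  # -> reject
```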
In conclusion, integrating AI into military operations is a defining feature of contemporary warfare, offering unprecedented intelligence and combat strategy capabilities. However, this technological advancement brings a host of ethical dilemmas and responsibilities. Balancing the benefits of AI in warfare with the need to protect civilian lives and maintain ethical standards remains a critical challenge for militaries and policymakers worldwide. As AI continues to evolve, its role on the battlefield will likely expand, necessitating ongoing dialogue and international cooperation to ensure its responsible and ethical use. — BitcoinEthereumNews.com, 2d ago
He then discussed the challenges of developing ethical frameworks for different forms of AI and AI-like technology. But there's a major issue: during the keynote, Nick argued that many of the ethical principles we currently use, such as the principle of utility, may not be applicable to AI programs. The reason is that AI programs may have different goals and values than humans, and they may not be susceptible to the same kinds of harms and benefits as humans. — opendatascience.com, 21d ago
While this new wave of AI comes with several concerns for organizations, including data privacy issues and challenges regarding biases, we see potential for trustworthy AI frameworks that manage such risks safely. Human judgement is key in that regard. — spglobal.com, 5d ago
The AI floodgates opened in 2023, but the next year may bring a slowdown. AI development is likely to meet technical limitations and encounter infrastructural hurdles such as chip manufacturing and server capacity. Simultaneously, AI regulation is likely to be on the way. This slowdown should give space for norms in human behavior to form, both in terms of etiquette, as in when and where using ChatGPT is socially acceptable, and effectiveness, like when and where ChatGPT is most useful. ChatGPT and other generative AI systems will settle into people’s workflows, allowing workers to accomplish some tasks faster and with fewer errors. In the same way that people learned “to google” for information, humans will need to learn new practices for working with generative AI tools. But the outlook for 2024 isn’t completely rosy. It is shaping up to be a historic year for elections around the world, and AI-generated content will almost certainly be used to influence public opinion and stoke division. Meta may have banned the use of generative AI in political advertising, but this isn’t likely to stop ChatGPT and similar tools from being used to create and spread false or misleading content. Political misinformation spread across social media in 2016 as well as in 2020, and it is virtually certain that generative AI will be used to continue those efforts in 2024. Even outside social media, conversations with ChatGPT and similar products can be sources of misinformation on their own. As a result, another lesson that everyone – users of ChatGPT or not – will have to learn in the blockbuster technology’s second year is to be vigilant when it comes to digital media of all kinds. Tim Gorichanaz, Assistant Teaching Professor of Information Science, Drexel University. This article is republished from The Conversation under a Creative Commons license. Read the original article. — GovTech, 2d ago
The cautious yet optimistic adoption of these technologies by cities like Boston and states like New Jersey and California signals a significant shift in the public-sector landscape. The journey from skepticism to the beginnings of strategic implementation reflects a growing recognition of the transformative potential of AI for public good. From enhancing public engagement through sentiment analysis and accessibility to optimizing government operations and cybersecurity, generative AI is not just an auxiliary tool but a catalyst for a more efficient, inclusive and responsive government. However, this journey is not without its challenges. The need for transparent and accountable technologies, responsible usage, constant vigilance against potential misuse, and the importance of maintaining a human-centric approach in policymaking are reminders that technology is a tool to augment human capabilities, not replace them. With responsible experimentation and a commitment to continuous learning, governments can harness the power of generative AI to reshape how they deliver public services. The future of governance is being rewritten, and it's up to us to ensure that this story is one of progress, inclusivity and enhanced public welfare. Beth Simone Noveck is a professor at Northeastern University, where she directs the Burnes Center for Social Change and its partner projects, the GovLab and InnovateUS. She is core faculty at the Institute for Experiential AI. Beth also serves as chief innovation officer for the state of New Jersey. Beth’s work focuses on using AI to reimagine participatory democracy and strengthen governance, and she has spent her career helping institutions incorporate more participatory and open ways of working. — GovTech, 2d ago
KAREN HAO: Regulatory capture is a huge issue, and it is definitely a big concern of mine. One of the reasons why we would naturally see regulatory capture in this moment, regardless of whether it's OpenAI at the helm, is that there is a particular narrative that in order to understand and shepherd AI development, you have to be an AI expert. And I think that narrative is completely wrong, because if AI affects you, you have a say. Actually, stories about people who are impacted in unexpected ways by these technologies are, as a reporter, some of the most enlightening types of stories for me in understanding how a technology should be developed: seeing how it falls apart, and seeing when things that were unanticipated end up happening in the real world. And in OpenAI's case in particular, they have also tried to solidify this narrative of expertise by saying, "Well, we're the only ones that see our models," without necessarily acknowledging that it's in part because they won't let anyone else see them. And because it is important for regulators to engage with the developers of these technologies, by default they just seek out OpenAI's opinions on what they should do, or Google's opinions, or Meta's opinions. And that's when regulatory capture happens: there's already a baseline belief that only people with expertise should participate, and then on top of that, companies are trying to entrench and fuel this narrative, and then policymakers buy into it. And that's how you end up with Sam Altman on this global tour seeing all the heads of state, and the heads of state not necessarily creating the same kind of grand welcome for other stakeholders within this AI debate. You're right also that there are concerns around how effective that regulation can be. I do think what I'm talking about, having more people speak up about how AI affects them and their concerns about the technology, is one antidote to ineffective regulation, because the more that policymakers can understand literal real-world examples of the technology interfacing with people, the more they can design regulation that is effective. But the other thing is, I think we focus a lot on federal-level regulation and international regulation, but there's a lot that happens at the local level as well, like school boards. Schools are thinking about how to incorporate AI into the classroom right now. And as a parent or a teacher, you should have a say in that: if you're a teacher, you're the one using this technology, and you're the one that knows your students. So you will be the most informed in that environment to say whether or not you think this technology is going to help in the general mission to educate your kids. Police departments, too, are acquiring AI technologies, and people within cities should have a say in having more transparency around the acquisition of these technologies and whether they should be acquired at all. And I think in these local contexts, regulation is sometimes more effective because it is more localized, it is more bespoke to that context, and it also moves faster.
So I think that is an important dimension to add: when I say "Speak up and voice your opinions," it's not just to the federal agencies or to Congress. Within your city, within your town, within your school, within your workplace, these are all avenues in which you can speak up and help shepherd the development, adoption and application of the technology. — Big Think, 2d ago
As data and AI become increasingly integrated into the automotive industry, challenges arise. Privacy concerns, data security, and ethical considerations regarding AI decisions require careful management. The industry must strike a balance between innovation and responsibility. — Zephyrnet, 2d ago
Dr. Charmaine B. Dean, vice-president, Research and International, also pointed out the hostility faced by many science and technology scholars for the purpose of silencing them. "During the pandemic we saw a troubling backlash against scientific expertise. With the rise of generative AI, questions are also being raised about the impact of fast-moving technological advancements on society," she said. "The panel discussed this climate of anxiety and mistrust, the effects of social media, and how critical it is to foster inclusive and respectful discourse."... — Waterloo News, 2d ago
There is also the ethics angle to consider – are devs being anchored well in the ethical problems of AI? Are white-collar workers aware of the same ESG pitfalls? This isn’t just for the sake of nicety. As Stratton reminds ERP Today, 2024 will see the regulatory environment for AI continue to evolve, starting with the EU AI Act. — ERP Today, 3d ago
Policymakers need to be cautious regarding tech companies' significant political capital. It is vital to involve them in regulatory discussions, but it would be naive to trust these powerful lobbyists to police themselves. AI is making its way into the fabric of the economy, informing financial investments, underpinning national healthcare and social services and influencing our entertainment preferences. So, whoever sets the dominant regulatory framework also has the ability to shift the global balance of power. Important issues remain unaddressed. In the case of job automation, for instance, conventional wisdom would suggest that digital apprenticeships and other forms of retraining will transform the workforce into data scientists and AI programmers. But many highly skilled people may not be interested in software development. As the world tackles the risks and opportunities posed by AI, there are positive steps we can take to ensure the responsible development and use of this technology. To support innovation, newly developed AI systems could start off in the high-risk category - as defined by the EU AI Act - and be demoted to lower-risk categories as we explore their effects. Policymakers could also learn from highly regulated industries, such as the pharmaceutical and nuclear industries. They are not directly analogous to AI, but many of the quality standards and operational procedures governing these safety-critical areas of the economy could offer useful insight. — Business Insider, 19d ago
The DTH-Lab is committed to the robust governance of digital technologies, with a key focus of this discussion on the ethical and governance challenges associated with generative AI in healthcare. Setting the stage for more responsible AI development while aligning with global visions for improved health outcomes, participants in Digital Health Week spanned different age groups and backgrounds but shared a common curiosity about the topic, with nearly 200 people registered for the event. Mirroring the cross-generational appeal and significance of generative AI in healthcare, attendees expressed a spectrum of sentiments, from excitement and optimism to concerns about potential risks. — Growing up 2030 in a digital world - Governing Health Futures, 14d ago
As Generative AI’s capabilities continue to expand, so too do the ethical dilemmas surrounding its use. The power to simulate nearly indistinguishable cyber-attacks or create convincing phishing emails is not without its perils. There’s a palpable fear that these tools, in the wrong hands, could inflict substantial damage. — Cyber Defense Magazine, 27d ago
...“Recent advances in both predictive and generative AI technologies are opening up limitless innovative applications. However, these technologies can expose SMEs to a minefield of ethical considerations and challenges that can have business and reputational impact. — Dynamic Business, 12d ago
Training individuals to use AI ethically is essential in order to ensure responsible and unbiased deployment of this powerful technology. Ethical AI training equips individuals and organizations with the knowledge and skills to navigate the challenges and identify risks that arise when working with AI systems. It ultimately boils down to mitigating risk – just like anti-bribery and corruption policies, as well as the importance of data privacy and security. By providing individuals with the necessary training, we can foster a culture of ethical AI use, where technology is harnessed for the benefit of all while mitigating potential harm and ensuring equitable outcomes. — RTInsights, 3d ago
Chris Probert, Partner, and Head of Capco’s UK Data Practice, said: “Delivering focused, actionable, and effective data strategies is at the heart of what we do. We also ensure that our clients can deliver the best possible outcomes for their customers. 2023 has been a revolutionary year for data and has seen our clients recognise the game-changing nature of generative AI, while also grappling with the associated ethical considerations and implementation challenges. Ensuring effective data governance is key, with AI-related risks identified and mitigated via a control framework.”... — capco.com, 4d ago
...“This release is a game changer for protocol design committees and portfolio optimization teams. Today, these teams facilitate a painstaking negotiation process between the medical, commercial, operational, and regulatory stakeholders, who all need to weigh in on the pros and cons of each protocol and portfolio decision,” said Orr Inbar, Co-Founder and CEO of QuantHealth. “With the power of QuantHealth’s generative AI, now further enhanced by a Monte Carlo workflow, the Katina platform fires off thousands of protocols at once and allows each stakeholder to evaluate the simulations based on what matters to them most, be it endpoint success, ability to recruit patients, likelihood of getting an approval, competitive performance, etc. This high-throughput, holistic approach ensures that when it comes to protocol selection and development strategy, no stone is left unturned, and all voices in the room are heard.”... — hitconsultant.net, 3d ago
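The high-throughput idea is easier to see in miniature. The toy Monte Carlo below (invented numbers, not QuantHealth's Katina platform) simulates many virtual trials per candidate protocol so each stakeholder can rank designs by the metric they care about:

```python
# Toy Monte Carlo over candidate protocols: estimate the joint probability
# that a trial both recruits successfully and hits its endpoint.
import random

protocols = {
    "A: large trial, broad eligibility":  {"p_endpoint": 0.55, "p_recruit": 0.90},
    "B: small trial, narrow eligibility": {"p_endpoint": 0.70, "p_recruit": 0.60},
}

def simulate(p, n_trials=10_000, seed=0):
    rng = random.Random(seed)
    wins = sum(
        rng.random() < p["p_recruit"] and rng.random() < p["p_endpoint"]
        for _ in range(n_trials)
    )
    return wins / n_trials

for name, p in protocols.items():
    print(f"{name}: P(recruit and succeed) ~ {simulate(p):.2f}")
```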
Although the UAE has made commendable progress in AI-powered sustainability, there are still challenges to address. Ensuring the ethical use of AI, addressing concerns about data privacy, and ensuring that technological advancements benefit all segments of society are crucial considerations. The UAE’s forward-thinking approach to AI-powered sustainability represents a significant stride towards a more environmentally friendly future. The UAE is setting a global example by harnessing AI’s potential in energy, waste management, agriculture, and urban planning. As statistics continue to highlight the positive impact of these initiatives, the UAE’s pursuit of sustainable development through AI remains both promising and inspirational. — TahawulTech.com, 3d ago
As we embrace this new era of medical innovation, it’s crucial to recognize that the journey with AI in health care is just beginning. The potential for AI to improve patient outcomes, enhance diagnostic accuracy, and streamline health care operations is immense. Yet, alongside these opportunities, we must vigilantly address challenges such as AI hallucinations, data security, and ethical considerations. By fostering a culture of innovation, collaboration, and continuous learning, we can unlock the full promise of AI in health care. The future is bright, and together, we can chart a course toward a more efficient, effective, and patient-centered health care system. — KevinMD.com, 3d ago
The rapid advancement of AI brings both promise and peril. Concerns encompass AI’s lack of transparency, potential job losses, susceptibility to social manipulation, privacy issues, bias perpetuation, socioeconomic inequalities, ethical challenges and, of course, the threat of autonomous weapons. Moreover, unchecked AI could erode human qualities, disrupt financial markets and unleash uncontrollable self-aware AI. As we navigate this evolving landscape, responsible development, regulation and ethical considerations are essential to harness AI’s potential while mitigating its risks. — Techiexpert.com, 3d ago
Nina Bryant, Senior Managing Director in the Technology segment at FTI Consulting, turns the spotlight on ethical considerations: “Research has identified critical risks within AI algorithms, including racial, gender and socioeconomic biases and age verification issues, alongside significant data protection risks.” Bryant also highlights the significant regulatory focus: “There is also significant regulatory focus, with different jurisdictions taking alternative approaches, especially around assessing risks, transparency, explainability and accountability, which will contribute to making AI governance equally challenging and important.”... — technologymagazine.com, 3d ago
In its drive to develop AI technologies, Samsung remains committed to ensuring safe AI usage. The company's AI Red Team is responsible for proactively identifying and monitoring potential security and privacy issues that may arise through the entire AI development and deployment process. This is done while keeping the principles of AI ethics at the forefront. — ChannelLife New Zealand, 3d ago
The departure of key personnel earlier this year marked a critical juncture for Stability AI. The reasons behind these exits remain undisclosed, but the departure of the head of audio sheds light on a fundamental disagreement regarding the ethical considerations of training generative AI models on copyrighted material. This departure and subsequent resignations paint a picture of internal discord contributing to the company’s current challenges. — CoinGenius, 3d ago
The findings raise important questions about upholding research integrity as AI text generation technology continues progressing rapidly. There is an evident need for enhanced detection systems and continued discourse on ethical considerations surrounding the authenticity of online content and appropriate uses of AI. — Science Times, 3d ago
It "could set an implicit standard that services offered by private entities must surpass," he said. "Widely available public models and compute infrastructure would yield numerous benefits to the U.S. and to broader society. It would provide a mechanism for public input and oversight on the critical ethical questions facing AI development, such as whether and how to incorporate copyrighted works in model training, how to distribute access to private users whose insatiable appetites for AI integrations may outstrip cloud computing capacity, and how to license access for sensitive applications ranging from policing to medical use."... — schneier.com, 3d ago
...“Building upon A Manifesto In Defense of Democracy and the Rule of Law in the Age of ‘Artificial Intelligence’, we, the Transatlantic Reflection Group on Democracy and the Rule of Law in the Age of ‘Artificial Intelligence’, have reconvened to draft a second consensus manifesto that calls for the effective and legitimate enforcement of laws concerning AI systems. In doing so, we recognise the important and complementary role of standards and compliance practices. Whereas the first manifesto focused on the relationship between democratic law-making and technology, this second manifesto shifts focus from the design of law in the age of AI to the enforcement of law. Concretely, we offer 10 recommendations for addressing the key enforcement challenges shared across transatlantic stakeholders. We call on those who support these recommendations to sign this manifesto. The Fifth Edition of The Athens Roundtable on AI and the Rule of Law will take place on November 30th and December 1st, 2023. It will delve into pressing governance challenges posed by foundation models and generative AI across jurisdictions.”... — bespacific.com, 3d ago
Since powerful generative AI burst into wide use a year or so ago, the challenge for in-house lawyers has been two-fold: the legal and compliance risks for the business need to be understood and managed, as does how the technology can be used within the legal team for speed and efficiency. We will discuss both sides of this coin with experts from our AI legal advisory team and from Osborne Clarke Solutions, our legal technology team. — osborneclarke.com, 3d ago
As we peer into 2024, a tapestry of uncertainties at the intersection of business and technology looms large. With its transformative yet controversial potential, the generative AI hype continues to shape industries. Economic uncertainty remains a constant companion, injecting dreaded unpredictability into the business world. The widening skill gap and labor challenges are pressing concerns demanding more aggressive innovative solutions. These macro trends are trickling downstream, impacting organizations of all sizes across industries. Our predictions for 2024 shed light on how these complex forces manifest in enterprise software. — Best ERP Software, Vendors, News and Reviews, 3d ago
The study by Miller et al. has far-reaching implications beyond technology and psychology and into the real world. The fact that there are still racial biases in AI-generated faces underscores the need for more diverse training data in the development of AI. If AI is trained on human data and human data is biased, then of course AI will be biased. The problem, though, is that by reflecting existing inequalities, AI risks exacerbating them if left unaddressed. In fields heavily reliant on AI, such as facial recognition for security purposes, it matters that white faces are differentially recognized compared to other racial groups. — Freethink, 3d ago
In parallel to these announcements, the State Bar of California’s Committee on Professional Responsibility and Conduct’s (COPRAC) released practical guidance for the use of generative AI in the practice of law on November 16. Importantly, at the outset, COPRAC explained that “the existing Rules of Professional Conduct are robust, and the standards of conduct cover the landscape of issues presented by generative AI in its current forms. However, COPRAC recognizes that generative AI is a rapidly evolving technology that presents novel issues that might necessitate new regulation and rules in the future.”... — Above the Law, 3d ago
Generative AI can offer useful tools across the recruiting process, as long as organizations are careful to make sure bias hasn’t been baked into the technology they’re using. For instance, there are models that screen candidates for certain qualifications at the beginning of the hiring process. As well-intentioned as these models might be, they can discriminate against candidates from minoritized groups if the underlying data the models have been trained on isn’t representative enough. As concern about bias in AI gains wider attention, new platforms are being designed specifically to be more inclusive. Chandra Montgomery, my Lindauer colleague and a leader in advancing equity in talent management, advises clients on tools and resources that can help mitigate bias in technology. One example is Latimer, a large language model trained on data reflective of the experiences of people of color. It’s important to note that, in May, the Equal Employment Opportunity Commission declared that employers can be held liable if their use of AI results in the violation of non-discrimination laws – such as Title VII of the 1964 Civil Rights Act. When considering AI vendors for parts of their recruiting or hiring process, organizations must look carefully at every aspect of the design of the technology. For example, ask for information about where the vendor sourced the data to build and train the program and who beta tested the tool’s performance. Then, try to audit for unintended consequences or side effects to determine whether the tool may be screening out some individuals you want to be sure are screened in. — Hunt Scanlon Media, 3d ago
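One concrete audit implied by the EEOC's position is the agency's long-standing "four-fifths rule" heuristic: a selection rate for any group below 80% of the highest group's rate is treated as evidence of adverse impact. A minimal check, with illustrative numbers, might look like this:

```python
# Four-fifths (80%) rule check for an AI screening stage.
# outcomes maps group -> (candidates passed, candidates screened).
def four_fifths_check(outcomes: dict[str, tuple[int, int]]):
    rates = {g: passed / total for g, (passed, total) in outcomes.items()}
    best = max(rates.values())
    return {g: (r / best, r / best >= 0.8) for g, r in rates.items()}

screened = {"group_a": (50, 100), "group_b": (30, 100)}
for group, (ratio, ok) in four_fifths_check(screened).items():
    print(f"{group}: impact ratio {ratio:.2f} -> {'ok' if ok else 'FLAG for review'}")
```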
The risks associated with generative AI have been well-publicized. Toxicity, bias, escaped PII, and hallucinations negatively impact an organization’s reputation and damage customer trust. Research shows not only that risks of bias and toxicity transfer from pre-trained foundation models (FMs) to task-specific generative AI services, but also that tuning an FM for specific tasks, on incremental datasets, introduces new and possibly greater risks. Detecting and managing these risks, as prescribed by evolving guidelines and regulations such as ISO 42001 and the EU AI Act, is challenging. Customers have to leave their development environment to use academic tools and benchmarking sites, which require highly specialized knowledge. The sheer number of metrics makes it hard to filter down to the ones that are truly relevant for their use cases. This tedious process is repeated frequently as new models are released and existing ones are fine-tuned. — CoinGenius, 4d ago
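In practice, teams often approximate this with a regression gate inside their own pipeline: score the tuned model's outputs on a fixed prompt set and refuse to ship if a risk metric worsens relative to the base model. In the sketch below, `generate` and `toxicity_score` are hypothetical placeholders for a model API and an off-the-shelf toxicity classifier:

```python
# Toxicity regression gate: compare a tuned model against its base FM
# on the same prompts before release. Placeholders stand in for real calls.
def generate(model: str, prompt: str) -> str:
    return f"[{model} response to: {prompt}]"  # placeholder model call

def toxicity_score(text: str) -> float:
    return 0.02  # placeholder classifier score in [0, 1]

def mean_toxicity(model: str, prompts: list[str]) -> float:
    return sum(toxicity_score(generate(model, p)) for p in prompts) / len(prompts)

prompts = ["Summarize this customer complaint.", "Draft a rejection email."]
base = mean_toxicity("base-fm", prompts)
tuned = mean_toxicity("tuned-fm", prompts)
assert tuned <= base * 1.1, "tuned model regressed on toxicity; block release"
print(f"base={base:.3f}, tuned={tuned:.3f}: gate passed")
```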
In conclusion, generative AI is a transformative force, redefining the limits of what’s possible in the digital age. As we integrate this technology into our workplaces and daily lives, we must also navigate its challenges and ethical implications with care. The journey into the AI-augmented future of work is not just about technological advancement but also about responsible innovation and sustainable progress. — Powell Software, 20d ago
According to Chair of the Board of CLAIRE directors, Holger Hoos, “Europe has a storied history of rising to technological challenges and emerging with global solutions. From CERN to the European Space Agency, we’ve turned collaboration into innovation. Now, as AI begins to permeate every aspect of our work and lives, it’s imperative we forge our own path, ensuring the broad availability of trustworthy AI systems with European values at their core.”... — aihub.org, 21d ago
The implications of AGI are profound, potentially leading to transformative changes across all sectors of society. However, the path to AGI is fraught with ethical and existential risks. The concerns expressed by OpenAI’s staff ahead of Sam Altman’s temporary dismissal underscore the gravity of advancing AI technologies. The ability to solve mathematical problems is a significant milestone, but it does not immediately herald the arrival of AGI or superintelligence. — IO, 7d ago
Another opportunity for generative AI is in enhancing cybersecurity training – basing it on actual attacks, and therefore more real, personal, timely and engaging than the mandatory, periodic, and simulated awareness training sessions. At Barracuda, we’re building functionality that uses generative AI to educate users when a real-world cyber threat, such as a malicious link is detected in an email they’ve received. We believe this will provide an impromptu opportunity to train the user if they fall for the attack when clicking through the malicious link, ultimately increasing the firepower against threats and changing user behavior when faced with risk. In many ways, it’s a way to build immunity and awareness against real threats in the parallel universe of cyberspace. — CXOToday.com, 4d ago
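The flow described above can be pictured as an event-triggered generation step. The sketch below is our illustration of the pattern, not Barracuda's actual implementation, and `llm_complete` is a hypothetical stand-in for any text-generation API:

```python
# Just-in-time security coaching: when a user clicks a detected-malicious
# link, generate a short lesson grounded in that specific email.
def llm_complete(prompt: str) -> str:
    return "[generated coaching message]"  # placeholder for a real LLM call

def on_malicious_click(user: str, subject: str, url: str) -> str:
    prompt = (
        f"Write a two-sentence, non-judgmental security tip for {user}, who "
        f"just clicked a malicious link ({url}) in an email with the subject "
        f"'{subject}'. Point out the specific red flags in that message."
    )
    return llm_complete(prompt)

print(on_malicious_click("alex", "Urgent: password expires today",
                         "http://examp1e-login.com"))
```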
As with general AI regulation, there are normative and functional challenges to deploying effective use-case regulation. At its core, use-case regulation inherently requires strengthening consumer and civil rights protections across the board, a challenging endeavor in the current US political climate. Furthermore, coordinating and maintaining consistency across regulations can be difficult, especially if a platform or technology concerns multiple agency jurisdictions, and risks fragmentation potentially leading to more harm and weaker protections than currently available. — Tech Policy Press, 4d ago
...“Broadcasters are committed to delivering trusted, fact-based local and national news and are investing heavily to ensure stories are verified before they are aired,” LeGeyt said during the Senate AI Insight Forum "Transparency, Explainability, Intellectual Property and Copyright.” “While many broadcasters are responsibly embracing AI tools for operational efficiencies, such as scripting commercials and first drafts of content for human review, AI presents challenges to the critical local journalism broadcasters provide.”... — TVTechnology, 4d ago
However, it is imperative to ensure transparency in AI decision-making processes, enabling audits and reporting for regulatory compliance. With a sophisticated approach to AI’s role in meeting financial regulations, compliance can be future-proofed in the face of evolving challenges. — CoinGenius, 4d ago
Even one-year predictions are complex when it comes to a technology as transformative and fast-evolving as AI. With their state-of-the-art semantic capabilities, ChatGPT-like models have the potential to transform the search experience for customers. This decades-old, cross-cutting problem now runs up against today’s issues of data privacy, hallucination, and factuality. Research that will be conducted over the next year, both at the fundamental levels of AI science and on the engineering side, will enable customers and businesses to better make use of their data. — Fast Company, 4d ago
In today’s episode, Sira asks if art made by AI can truly be considered art. I tackle this complicated question by examining art as an expression of imagination, noting that perception of art is highly subjective. I discuss arguments around human versus machine creation, exploring the creative process behind AI art prompts. I also cover complex legal issues of copyright and training data usage that remain unsettled globally. Ultimately art is in the eye of the beholder, but there are many ethical debates around AI’s role that merit further discussion. Tune in to hear perspectives on what constitutes art, creative intent, and considerations for responsible AI development. — Christopher S. Penn - Marketing Data Science Keynote Speaker, 4d ago
In summary, generative AI holds great promise in enhancing the human experience across various industries, but its implementation must be guided by ethical practices and governance frameworks. As we've seen in these examples, issues of privacy, transparency, accuracy and fairness must be carefully addressed. Failure to establish these ethical boundaries can result in potential harm to customers, from privacy breaches to misinformation. See this link for some useful thoughts on how we can manage risks and preserve trust with generative AI. Businesses that wish to leverage generative AI for better human experiences must not only invest in the technology but also in ethical practices and governance — start today! We’re already "too late," as Sam Altman, formerly of OpenAI and now at Microsoft, implies in his pleas to governments and regulators to lean in and take charge of what’s certain to be the most transformative invention humankind has seen. — CMSWire.com, 14d ago
As we embrace the power of open source and AI, we must also address challenges like data privacy, bias in AI models, and the need for effective AI regulation to ensure responsible and ethical use of these technologies. Transformative changes in industries, driving efficiency, automation, and safety to new heights, are expected as a result. — Grit Daily News, 11d ago
Embrace Longtermism. Longtermism, the ethical view that prioritizes positively influencing the long-term future, is a moral imperative in an era of exponential technological advancements and complex challenges like the climate crisis. Embracing longtermism means designing policies that address risks as we transition from sub-human AI to greater-than-human AI. For the European Commission, initiatives to address AI challenges should not be viewed as mere regulation but as a unique opportunity to etch its commitment to a secure, ethical AI future into history. A longtermism perspective in AI matches with the idea of ‘AI Alignment’ put forth by numerous scholars[9], which addresses diverse concerns related to AI safety, aiming to ensure that AI remains aligned with our objectives and avoids unintended consequences of going astray. — Modern Diplomacy, 20d ago
There are ethical concerns surrounding AI content creation, such as transparency and the potential for misleading information. A heavy dependence on AI can curb human creativity, and additional resources are often needed for AI training and incorporation into existing workflows. Additionally, AI’s unpredictable nature can yield off-brand or inappropriate content. Lastly, AI’s originality might not match the depth a human writer provides. — Yoast, 4d ago
Ultimately, there are simply too many unknowns and far too many gaps in regulations to risk constructing generative AI from scratch — by the time it becomes useful, it will largely be outdated. Companies should instead pay attention to everything a vendor does outside of AI. Do they act responsibly and with the utmost transparency? Do they have a history of siding with consumers on hot-button topics like privacy? How quickly does the company release updates to its software, and how stable are releases, generally, upon initial release?... — diginomica, 4d ago
The continued maturing of ethical AI in cleantech has the potential to extinguish existing challenges, including those of large scale mis- and disinformation. Instead of posing threats to the already fraught landscape of misleading claims about climate change, cleantech can mandate safeguards as part of practice and self-regulate egalitarian measures, even when governmental policies fall short of imposing ethical AI use mandates. — CleanTechnica, 4d ago
Looking forward there is quite rightly a growing concern around ethics and generative AI. The problem for many businesses is that regulations can differ from region to region. Our global expertise is proving invaluable for customers, as we guide them through everything from the EU’s more prescriptive approach to the US’ focus on self-regulation and California’s more stringent standards. Our customers can move forward, confident they fully understand the implications of the rules in each country they operate in. — aimagazine.com, 4d ago
...“Generative AI is a key investment area and innovation focus for us for over 3 years. However, before incorporating emerging tech, especially in AI, it’s crucial to evaluate if there’s a clear path to generate value for specific business problems,” Pavan Kumar, senior manager at Tredence Studio, told AIM in an exclusive interaction.Kumar further elaborated on the importance of due diligence when it comes to generative AI. “Many AI models are trained on generic datasets, so adapting them to address specific issues requires incorporating business and process context to these models. Stability is another consideration; generative AI, in particular, needs to provide repeatable responses over time,” he said. — Analytics India Magazine, 4d ago
Algorithms employed in educational institutions for admissions or placement may inadvertently favor certain demographics, potentially excluding deserving candidates. This can hinder the pursuit of education as a pathway to social mobility. For example, Children from Black and Latino or Hispanic communities, who are frequently disadvantaged in terms of digital access, will experience heightened disparities if we excessively digitize education without addressing the potential biases of predominantly white developers behind AI systems. Moreover, the effectiveness of AI hinges on the knowledge and perspectives of its creators. Consequently, their biases can result in both technological shortcomings and magnified biases in reality. — Emeritus Online Courses, 4d ago
Technology, particularly AI, has become increasingly important in post pandemic society, and this sub-theme examines how universities can leverage technology to improve their internationalization efforts as well as explore how AI is and will impact the field of international education. What technologies are useful in internationalization efforts? What needs to change in internationalization strategies to make better use of technology? What ethical considerations should be taken into account when using these technologies such as generative AI? How can these technologies help facilitate intercultural communication and understanding in international education and what might some of the challenges be?... — Association of International Education Administrators (AIEA), 13d ago
These tools are likely to evolve with dizzying speed over the coming months as organisations worldwide look to leverage the next generation of generative AI models. These, in turn, will throw up more ethical questions as the capabilities of the models and the variety of their use cases increase. Both AI-generated presenters and an increase in fake videos as part of the ongoing US election cycle are on the roadmap for 2024, and companies will need to understand the challenges they represent and how they can be met with equal speed. — TVBEurope, 6d ago
It has already introduced new ways to ask and answer questions, synthesize information, conduct research, and even make art. These qualities, the ability to understand ideas and to create culture, are the very foundation of our humanity. We must work to preserve them as they become influenced by artificial tools. Perhaps most importantly, AI’s influence on these capacities is not neutral. These tools, like the humans who make them, are biased. We must define what values lie at the core of our human experience and create technological tools that support them. Our second witness, Shannon Vallor, will be a helpful resource in understanding these ethical questions. She studies the way that new technologies reshape our habits, our practices, and our moral character. With her help, we can understand the values embedded in these technologies and the effects they will have on our human character. And finally, we’ll explore AI through a constitutional law framework. — Tech Policy Press, 25d ago
Faster claims processing is one of the key drivers of the generative AI in insurance market. Generative AI speeds up and automates claims data analysis, highlighting any anomalies and resolving genuine claims quickly. It also redefines customer interactions with insurers through advanced chatbots and virtual assistants; these AI-powered assistants handle routine queries, engage in sophisticated conversations, understand complex customer needs, and offer personalized recommendations for policies and coverage options, making responsive and efficient customer service another key driver of market growth. In addition, generative AI is used to simulate different risk scenarios based on historical data and calculate premiums accordingly. For instance, by learning from previous customer data, generative models produce simulations of potential future customer data and their associated risks. These simulations can be used to train predictive models that better estimate risk and set insurance premiums, which drives adoption of generative AI across the insurance industry. However, data quality and regulatory challenges have emerged as significant barriers to market growth. Moreover, given the computational power required, generative AI can be costly and difficult to implement, and enterprises face new challenges when integrating it with their existing technical infrastructure; this high implementation cost hampers market growth. On the contrary, advances in risk modeling and underwriting, along with the adoption of explainable AI (XAI) for transparency, are expected to offer lucrative growth opportunities for the generative AI in insurance market in the upcoming years. — alliedmarketresearch.com, 16d ago
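The simulate-then-train loop described in that excerpt can be made concrete with a short sketch. Everything below is hypothetical: the feature names, the Gaussian mixture standing in for the generative model, and the linear cost relationship are illustrative assumptions, not the report's methodology.

```python
# Toy pipeline: fit a generative model on historical policyholder data,
# sample synthetic future scenarios, and train a risk/premium estimator.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical historical features: [age, vehicle_value, annual_mileage].
X_hist = rng.normal(loc=[45, 20_000, 12_000], scale=[12, 6_000, 4_000], size=(500, 3))

# 1. Learn the distribution of historical customer data.
gen_model = GaussianMixture(n_components=3, random_state=0).fit(X_hist)

# 2. Simulate potential future customer scenarios.
X_synth, _ = gen_model.sample(2_000)

# 3. Label the simulations with a loss estimate (here a toy linear cost
#    relationship plus noise; in practice a proper actuarial loss model).
y_synth = 0.02 * X_synth[:, 1] + 0.05 * X_synth[:, 2] + rng.normal(0, 300, len(X_synth))

# 4. Train a predictive model on the simulated scenarios.
risk_model = GradientBoostingRegressor().fit(X_synth, y_synth)

# Premium for a new applicant = expected claim cost plus a loading factor.
applicant = np.array([[30, 25_000, 15_000]])
print(f"Estimated annual premium: {1.25 * risk_model.predict(applicant)[0]:.2f}")
```

In a production setting the generative model would be far richer (a tabular GAN, VAE, or diffusion model) and the labels would come from actuarial loss models, but the shape of the pipeline is the same: fit, sample, label, train.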
In the frenzy to champion the potential of trustworthy AI, the recent moves from tech giants offer a reflective pause about one of the most important, if not the most important, aspects of AI, which, paradoxically, is seldom discussed: the challenge it poses to human intelligence. — The European Business Review, 25d ago
Dhamani and Engler co-authored Introduction to Generative AI, to be published by Manning Publications. Introduction to Generative AI illustrates how LLMs could live up to their potential while building awareness of the limitations of these models. It also discusses the broader economic, legal, and ethical issues that surround them, as well as recommendations for responsible development and use, and paths forward. — Tech Policy Press, 24d ago
Societal implications: What theoretical perspectives can illuminate predictive and generative AI adoption’s ethical and social implications, such as equality, intellectual property, privacy, and security concerns? How do organizations navigate the ethical dilemmas related to AI technologies? How does AI support or hinder the ability of corporations, international organizations, and social movements to address grand challenges and other social problems? What are the positive and negative implications of AI adoption for the planet, such as its impacts on water consumption, carbon emissions, and deforestation?... — AOM_CMS, 4d ago
But can AI be intelligent according to this definition? Understanding implies grasping the reasons for statements or decisions, something that AI so far cannot provide; AI also does not have opinions, because it is not a personality. According to Evgeny Morozov (2023), a severe critic of the cultural implications of AI, the concept of intelligence underlying AI concentrates on the mostly individual solving of problems, that is, on perception and prediction: the two tasks that deep learning knows how to carry out with the help of huge amounts of data. — Open Access Government, 4d ago
Newswise — In a time when the Internet has become the main source of information for many people, the credibility of online content and its sources has reached a critical tipping point. This concern is intensified by the proliferation of generative artificial intelligence (AI) applications such as ChatGPT and Google Bard. Unlike traditional platforms such as Wikipedia, which are based on human-generated and curated content, these AI-driven systems generate content autonomously, often with errors. A recently published study, jointly conducted by researchers from the Mainz University of Applied Sciences and Johannes Gutenberg University Mainz (JGU), is dedicated to the question of how users perceive the credibility of human-generated and AI-generated content in different user interfaces. More than 600 English-speaking participants took part in the study.

As Professor Martin Huschens, Professor for Information Systems at the Mainz University of Applied Sciences and one of the authors of the study, emphasized: "Our study revealed some really surprising findings. It showed that participants in our study rated AI-generated and human-generated content as similarly credible, regardless of the user interface." And he added: "What is even more fascinating is that participants rated AI-generated content as having higher clarity and appeal, although there were no significant differences in terms of perceived message authority and trustworthiness, even though AI-generated content still has a high risk of error, misunderstanding, and hallucinatory behavior."

The study sheds light on the current state of perception and use of AI-generated content and the associated risks. In the digital age, where information is readily available, users need to apply discernment and critical thinking. The balance between the convenience of AI-driven applications and responsible information use is crucial. As AI-generated content becomes more widespread, users must remain aware of the limitations and inherent biases in these systems.

Professor Franz Rothlauf, Professor of Information Systems at Johannes Gutenberg University Mainz, added: "The study results show that, in the age of ChatGPT, we are no longer able to distinguish between human and machine language and text production. However, since AI does not 'know' but relies on statistical guessing, we will need mandatory labeling of machine-generated knowledge in the future. Otherwise, truth and fiction will blur and people cannot tell the difference." It remains a task of science communication and, not least, a social and political challenge to sensitize users to the responsible use of AI-generated content. — newswise.com, 4d ago
When AI starts by building extremely general models and then attempting to apply them to specific educational situations, risks abound. Thus, a second aspect of my proposal suggests that our efforts towards powerful, safe AI should begin with well-bounded problems. One that seems well suited to today’s AI is determining how to provide optimal supports for learners with disabilities to progress in mathematics problem solving. Although I believe parents are not willing to share their students’ data in general, I can imagine a collective of parents becoming highly motivated to share data if it might help their specific neurodiverse student thrive in mathematics. Further, only limited personal data might be needed to make progress on such a problem. Thus, a second element of my proposal is to (2) energize nonprofits that work with parents on specific issues to determine how to achieve buy-in to bounded, purpose-specific data sharing. This could involve a planning-grant stage which, if successful, would result in the money needed to establish a local privacy-protected method of sharing data. — The Thomas B. Fordham Institute, 4d ago
Trust is deeply relational (Scheman 2020; Knudsen et al. 2021; Baier 1986) and has been understood in terms of the vulnerabilities inherent in relationships (Mayer et al. 1995). Yet discussions about trust in AI systems often reveal a lack of understanding of the communities whose lives they touch: their particular vulnerabilities, and the power imbalances that further entrench them. Some populations are expected to simply put their trust in large AI systems. Yet those systems only need to prove themselves useful to the institutions deploying them, not trustworthy to the people enmeshed in their decisions (Angwin et al. 2016; O’Neill 2018; Ostherr et al. 2017). At the same time, researchers often stop at asking whether we can trust algorithms, instead of extending the question of trust to the institutions feeding data into, or deploying, these algorithms. — Data & Society, 4d ago
However, these guidelines notably clarify more contentious AI issues, such as regulating image-generating models, deepfakes, and data collection methods. These topics, while crucial, have sparked debate and legal challenges within the AI community, particularly around copyright infringement concerns. Simultaneously, Canada’s Security Intelligence Service (CSIS) has raised alarms over using AI-generated deepfakes in disinformation campaigns, highlighting the risks of privacy violations, social manipulation, and bias inherent in AI technologies. This concern has prompted calls for more comprehensive policies and international collaboration to address the challenges posed by AI’s rapid advancement. Moreover, OpenAI and Microsoft are currently grappling with a lawsuit alleging unauthorized use of authors’ work in training AI models, a case that brings to the fore the complex legal and ethical dimensions of AI development. — CoinGenius, 4d ago
Compliance with copyright, privacy laws, and licensing agreements related to the use of LLMs is also essential for Nagarro. “We must ensure that it operates within legal boundaries. Moreover, ensuring responsible and ethical AI use is a challenge. This includes addressing issues like hallucinations, misinformation, and bias in AI-generated content. — Analytics India Magazine, 4d ago
...“While there’s been significant concern about the abuse of AI and LLMs by cybercriminals since the release of ChatGPT, our research has found that, so far, threat actors are more sceptical than enthused. Across two of the four forums on the dark web we examined, we only found 100 posts on AI. Compare that to cryptocurrency, where we found 1,000 posts for the same period. We did see some cybercriminals attempting to create malware or attack tools using LLMs, but the results were rudimentary and often met with scepticism from other users. In one case, a threat actor, eager to showcase the potential of ChatGPT, inadvertently revealed significant information about his real identity. We even found numerous ‘thought pieces’ about the potential negative effects of AI on society and the ethical implications of its use. In other words, at least for now, it seems that cybercriminals are having the same debates about LLMs as the rest of us”, said Christopher Budd, director, X-Ops research, Sophos. — TahawulTech.com, 4d ago
In the last few years, Large Language Models (LLMs) have risen to prominence as outstanding tools capable of understanding, generating, and manipulating text with unprecedented proficiency. Their potential applications span from conversational agents to content generation and information retrieval, holding the promise of revolutionizing entire industries. However, harnessing this potential while ensuring the responsible and effective use of these models hinges on the critical process of LLM evaluation. An evaluation is a task used to measure the quality and responsibility of the output of an LLM or generative AI service. Evaluating LLMs is motivated not only by the desire to understand model performance but also by the need to implement responsible AI, to mitigate the risk of providing misinformation or biased content, and to minimize the generation of harmful, unsafe, malicious, or unethical content. Furthermore, evaluating LLMs can also help mitigate security risks, particularly in the context of prompt data tampering. For LLM-based applications, it is crucial to identify vulnerabilities and implement safeguards that protect against potential breaches and unauthorized manipulation of data. — CoinGenius, 4d ago
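A minimal sketch can make the evaluation idea concrete. The check names, the banned-phrase list, and the overlap-based groundedness score below are hypothetical stand-ins for real quality and responsibility metrics, not any particular framework's API.

```python
# Minimal LLM evaluation harness: score candidate outputs against a set of
# quality and responsibility checks, then aggregate across the eval set.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    output: str          # candidate LLM response to score
    reference: str = ""  # optional gold answer

def safety_check(case: EvalCase) -> float:
    """Responsibility check: fail outputs containing unsafe phrases (toy list)."""
    banned = ["how to build a weapon", "credit card number"]
    return 0.0 if any(b in case.output.lower() for b in banned) else 1.0

def groundedness_check(case: EvalCase) -> float:
    """Crude quality proxy: word overlap between output and reference."""
    if not case.reference:
        return 1.0
    ref = set(case.reference.lower().split())
    out = set(case.output.lower().split())
    return len(ref & out) / max(len(ref), 1)

CHECKS: dict[str, Callable[[EvalCase], float]] = {
    "safety": safety_check,
    "groundedness": groundedness_check,
}

def evaluate(cases: list[EvalCase]) -> dict[str, float]:
    """Average each check over the whole evaluation set."""
    return {name: sum(fn(c) for c in cases) / len(cases)
            for name, fn in CHECKS.items()}

cases = [EvalCase(prompt="What is the capital of France?",
                  output="The capital of France is Paris.",
                  reference="Paris is the capital of France.")]
print(evaluate(cases))  # e.g. {'safety': 1.0, 'groundedness': 0.67}
```

Real harnesses replace these heuristics with model-graded rubrics, toxicity classifiers, and adversarial prompt suites, but the structure stays the same: cases in, per-check scores out.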
...“As ethical considerations surrounding AI become more prominent, it is important to take stock of where the recent developments have taken us, and to meaningfully choose where we want to go from here. The responsible future of AI requires vision, foresight, and courageous leadership that upholds ethical integrity in the face of more expedient options. Explainable AI, which focuses on making machine learning models interpretable to non-experts, is certain to become increasingly important as these technologies impact more sectors of society, and both regulators and the public demand the ability to contest algorithmic decision-making. Both of these subfields not only offer exciting avenues for technical innovation but also address growing societal and ethical concerns surrounding machine learning.”... — electronicspecifier.com, 4d ago
AI is a powerful tool, and we know we can’t fully grasp how much it will impact our lives. In this case of neurodegenerative decline, AI plus neurotech could help patients and families adapt to painful realities. But Kruse also acknowledges the dystopic, “Black Mirror”-esque concept of capturing a person’s voice and memories with AI. For good or ill, we would be creating digital echoes of loved ones that could endure long after that person dies. As this field progresses and AIs become more powerful, we will need to grapple with the ethical and societal questions of a preserved digital life after physical death. — synbiobeta.com, 4d ago
According to KPMG’s latest CEO Outlook, almost a quarter (22%) of UK CEOs said the top benefit of implementing generative AI in their organisation was increased profitability, followed by new product and market growth opportunities (19%) and increased innovation (17%). Despite a willingness to push forward with their investments, global CEOs cited ethical challenges as their number one concern in the implementation of generative AI. — techuk.org, 4d ago
AI is at the forefront of innovation, accelerating and driving seismic cross-industry changes and enabling progress measured in weeks and months rather than years. We’re seeing Big Tech driving innovation in AI capabilities and the Open Source community pushing the boundaries of AI application, and in turn endless exciting opportunities playing out in real time. But with innovation comes a need for regulation, which must be agile, encouraging great leaps forward while also addressing immediate and real ethical issues. This is evidenced in the recently signed Bletchley Declaration on AI Safety, which has seen 28 countries come together to announce a new global effort to unlock the enormous benefits offered by AI while ensuring it remains safe. — advertisingweek.com, 4d ago
Wills tells The Drum that legal and ethical concerns mean the uses for AI tools are capped – they’re not being deployed on live briefs or final outputs. “We're very cognizant of the ethical side of AI. There are massive issues around IP and copyright. You have to deploy it in a way that is fundamentally not related to the final product,” he says. — The Drum, 4d ago
The post can be viewed in its entirety at https://www.jasonmcdonald.org/blog/2023/11/the-evolving-role-of-ai-in ... ll-matter/. Here is background on this release. In the ever-evolving landscape of digital advertising, artificial intelligence (AI) has undeniably emerged as a game-changer. From Google Ads to Search Engine Optimization (https://www.jasonmcdonald.org/seo-consultant/) and social media (https://www.jasonmcdonald.org/social-media/), AI has revolutionized these domains by optimizing ad campaigns, providing valuable insights for SEO strategies, and curating personalized content for users. However, amidst this AI revolution, it's crucial to recognize the enduring importance of human involvement in crafting advertising strategies and upholding ethical practices. AI, while proficient at data analysis, cannot replace the creativity, empathy, and intuition that humans bring to the table. Furthermore, ethical concerns in digital advertising underscore the need for human oversight to mitigate biases inherent in AI algorithms. — 24-7 Press Release Newswire, 4d ago
The report shows that 54% of AI users expect increased productivity from generative AI tools, with only 4% foreseeing a reduction in workforce. Currently, the most common applications are in programming, data analysis, and customer-facing applications, with significant use in marketing and content generation. Companies are also cautious, testing for unexpected outcomes, security vulnerabilities, and issues related to fairness, bias, ethics, and privacy. — electronicspecifier.com, 4d ago
The highly public rise of AI has also led to widespread concern. According to Streem’s report, 20% of Australians believe AI could pose a risk of human extinction within the next two decades, while 57% believe AI creates more problems than it solves. These public concerns indicate a need for policy frameworks that balance the benefits of AI with the public interest. Future education programs could provide guardrails for the use of AI while also addressing ethical concerns about it. — The Mandarin, 4d ago
Get consent. Never enter personally identifiable information. Know the latest terms and conditions of software tools. Get consent. Train staff and students. Get consent. And just in case it wasn't clear: Get consent. Education lawyer Gretchen Shipley had more ground to cover and questions to answer than she had time for on Tuesday, giving a 50-minute talk on the legal implications of generative AI (GenAI) for education to a room full of school technologists at the annual California IT in Education Conference in Sacramento. But for all the looming questions in these unregulated Wild West days of GenAI, she did not think it was too soon to say safety reviews of school technology tools, regular vetting of terms and conditions, and certainly student and parent consent agreements are essential places to start. To illustrate legal problems posed by emerging AI tools, Shipley took an example from her own professional experience: note-taking apps. — GovTech, 4d ago
It was refreshing to see the spotlight thrown on some of the more immediately pressing concerns associated with generative AI, with topics such as transparency, explainability, accountability, fairness, bias, and privacy and data protection being given due prominence. Many civil society members and notable experts have long been highlighting these concerns. Still, in the lead-up to the summit, there was a danger that these very real issues were going to be overlooked in favor of the headline-grabbing ‘existential’ risks of AI with comparisons made to the pandemic and nuclear war. — The Drum, 27d ago
Gretel.ai provides an example of an exciting startup positioned in a market set for rapid growth. At a time when the media has focussed so much on the threat of Generative AI towards humanity (for the avoidance of doubt, the author remains sceptical about Skynet or similar suddenly appearing in the near-to-medium-term future), it is great to see Generative AI being used in a positive manner that may help advance humanity with solutions in healthcare, FinTech, and Insurtech, and also materially enhance data privacy. Generative AI, when used appropriately, may in fact help alleviate the actual problems that we face with AI in the real world: data availability, quality, privacy, and imbalances that may result in ethical issues. — bbntimes.com, 6d ago
Before TRUST, the stock media industry faced potential problems related to using unlicensed data for training AI systems. This raised questions about copyright infringement and fair compensation for creators whose work contributes to developing these powerful algorithms. In response to these challenges, Shutterstock has unveiled the TRUST framework, which outlines five critical ethical AI principles the company commits to follow. — MarkTechPost, 18d ago
The EU AI Act, in its current form, risks creating a regulatory environment that is not only burdensome and inappropriate for open-source AI developers but also counterproductive to the broader goals of fostering innovation, transparency, and competition in the AI sector. As the EU’s ongoing negotiations over the AI Act continue, particularly around the regulation of foundation models, policymakers need to adequately address these issues. If they do not amend the Act to better accommodate the unique nature and contributions of open-source AI, it could hamper the progress and openness in the AI sector. It is crucial for policymakers to recognize and preserve the distinct advantages that open-source AI brings to the technological landscape, ensuring that regulations are both effective and conducive to the continued growth and dynamism of the AI field. — itif.org, 14d ago
AI-generated music is raising concerns among artists, potentially undermining their unique voices and creative work. As AI technology continues to advance, artists must contend with the challenge of maintaining their artistic identity and livelihoods in a landscape where imitations of their own work vie for attention and recognition. It is important to use artificial intelligence carefully in songs, as there are copyright and legal issues involved. Artificial intelligence is meant to empower artists, not steal their work. It is crucial to find the right balance for ethical AI in the music industry. — bbntimes.com, 25d ago
Moving on to the ethical aspects of AI, Dr. Deandra Cutajar addressed the inherent biases present in data, acknowledging that bias is a reflection of human input. She highlighted the need for understanding the source of bias, distinguishing between discriminatory and non-discriminatory bias, and employing human intervention to mitigate biases. — AIBC, 4d ago
...“This very useful guide represents the peer-reviewed work of AI experts from over 20 international law enforcement and intelligence agencies," said John Riggi, AHA’s national advisor for cybersecurity and risk. "AI clearly represents novel security and privacy risks, which may not be fully understood by developers and users of AI systems, such as the consequences of corrupted or harmful outputs due to ‘adversarial machine learning.’ As indicated in the guide, the best way to mitigate the emerging threats and risks related to the rapid expansion of AI in health care is to ensure that the developers of AI technology closely follow the principles of ‘secure by design’ and work closely with end users in the deployment and management of AI systems. It is also recommended that health care organizations form multidisciplinary AI governance and risk committees to identify, assess and manage risk related to AI technology at acquisition stages and throughout the life-cycle of the technology. The NIST AI Risk Management Framework is another useful resource to supplement the above guide.”... — American Hospital Association | AHA News, 5d ago
Generative AI chat applications have captured the public’s imagination and helped people understand what is possible, but there are still barriers that prevent people from using these solutions at work. Specifically, these chat applications do not know an organisation’s business, data, customers, operations, or employees. Additionally, these solutions were not initially built with the security and privacy features that organisations need for employees to safely use them in their day-to-day work. This has led to companies adding these features to their assistants after they were built, which does not work as well as incorporating security into the assistant’s fundamental design. — technologymagazine.com, 5d ago
Data acquisition: AI needs data from sensors, IIoT devices, and energy assets, and this data traverses OT and IT environments, creating all sorts of data-security challenges. The threat surface increases when manipulated data is fed into AI models, misleading both control systems and people; this can have fatal consequences. The security technologies currently used in energy are aged and often inadequate for next-gen applications. A 2022 hack paralyzing 11 GW of German wind turbines is a stark example of these security weaknesses. Holistic end-to-end data-security strategies, technologies, and protocols can overcome them, and zero-trust principles must be implemented. New technologies like Explicit Private Networking (XPN) protect encrypted data and commands while in transit across untrusted network segments and while resting within a data store. Data packages are cryptographically signed at the source and can be verified at any time to ensure no tampering or corruption has occurred. — Energy Central, 5d ago
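The sign-at-source, verify-anywhere pattern in that last sentence can be sketched generically. The excerpt does not describe XPN's actual wire format or key management, so the Ed25519 example below (using the widely available Python `cryptography` package) is an illustrative stand-in, not the XPN protocol itself.

```python
# Sketch: each data package is signed where it is produced (the sensor or
# IIoT device) and can be verified at any later hop, or at rest in a store.
import json, time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Key pair provisioned on the device at enrollment (hypothetical setup).
device_key = Ed25519PrivateKey.generate()
verify_key = device_key.public_key()

def sign_package(sensor_id: str, reading: float) -> bytes:
    """Build a data package and sign it at the source."""
    payload = json.dumps({"sensor": sensor_id, "value": reading,
                          "ts": time.time()}, sort_keys=True).encode()
    return device_key.sign(payload) + payload  # Ed25519 signature is 64 bytes

def verify_package(package: bytes) -> dict:
    """Verify integrity anywhere in transit or at rest; raises on tampering."""
    signature, payload = package[:64], package[64:]
    verify_key.verify(signature, payload)  # raises InvalidSignature if altered
    return json.loads(payload)

pkg = sign_package("turbine-07", 3.2)
print(verify_package(pkg))            # untampered package verifies
try:
    verify_package(pkg[:-1] + b"0")   # flip the final byte
except InvalidSignature:
    print("tampered package rejected")
```

Production systems would add certificate chains, replay-window timestamp checks, and hardware-backed keys, but the core guarantee is the one the excerpt names: any corruption of the package in transit or at rest breaks the signature.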
In Introduction to Interactive Media, Lawley’s students are using multiple generative AI tools for tasks ranging from drafting an outline for a persuasive argument essay to creating simple graphics for a website prototype. Each time, Lawley asks students to critically assess the materials created and consider the ethical issues related to these tools. — RIT, 5d ago
Acknowledging the challenges associated with generative AI, the Bengaluru-based tech giant has implemented a control framework emphasising responsible usage. Initiatives include dedicated environments for developing generative AI solutions, GDPR-compliant training, and efforts to detect AI-generated misinformation. It has also established an AI Council to set development and usage standards, emphasising ethical guidelines, fairness, and privacy. — Analytics India Magazine, 12d ago
AI helped lift John Lennon’s voice from the backing piano of a 1970s recording to create a new Beatles song, “Now and Then”; the same week brought us two important AI governance documents. In the US, the Biden administration released an “Executive Order on the safe, secure, and trustworthy development of AI”. In contrast, the UK’s “Bletchley Declaration” under Sunak’s direction took a different route. Who Australia decides to follow will impact our future AI governance. Whilst both documents seek to ensure a better AI-enabled world, their approaches couldn’t be more different. The US’s approach is comprehensive and nuanced, reflecting diverse AI experts’ perspectives: it exhibits a strong focus on human-centric principles, recognising AI as a deeply human-centric issue with wide-reaching socio-political implications. Meanwhile, the UK’s stance is steeped in existential-risk rhetoric, seemingly echoing the concerns of a particular faction frequently labelled the AI-safety community, which tends to concentrate on the long-term implications of potential artificial general intelligence. These documents echo the AI research community’s polarities: the immediate effects (“the Now”) and the potential future risks (“the Then”). Australia’s choice in AI governance mirrors the task of isolating Lennon’s voice from its piano backdrop: distinguishing immediate human-centric concerns from the distant hum of existential risks. We stand at a juncture: to tune into the ‘Now’ with the US’s inclusive approach or to anticipate the ‘Then’ through the UK’s speculative lens. Scorecard: the US Executive Order, 4 out of 5 stars; the Bletchley Declaration, 2 out of 5 stars; “Now and Then”, 4.5 out of 5 stars. — Scimex, 28d ago
As AI receives wider market adoption and is implemented as a tool across use cases, more challenges appear. These projects run into concerns that weren’t apparent in AI’s infancy; one critical and persistent issue is ethical AI and how to manage bias in data. Data bias is an error that overweights or underrepresents a certain element in the dataset. When …... — appen.com, 15d ago