
Policies. Securing a social license for the use of any new, particularly disruptive technology is crucial to its success. This need will be especially acute with autonomous agents making and implementing decisions with minimal-to-no human oversight. Formal regulation may take time, but to generate and maintain widespread societal buy-in, companies should be hypervigilant in enforcing guardrails to ensure safe and appropriate use. Until regulation is in place, self-regulation, as demonstrated by numerous industries in the past, is also a responsible step that will help in the pursuit of a social license. Self-regulation on its own, however, is not a sustainable long-term solution. Organizations should be actively engaged with regulators to help craft the right approach for governing and monitoring the use of these emerging technologies. (Fortune, 8h ago)
ARK’s Open Research Ecosystem seeks to capitalize on rapid change through an open approach and the convergence of insights. ARK believes that a combination of top-down and bottom-up research allows us to size the investment opportunity of disruptive innovation, and then detect and evaluate companies best positioned to benefit. To gain a deeper understanding of quickly changing themes, ARK employs an open research strategy to gather information, both helping to define and refine its internal research process. Inputs include Theme Developers, who are thought leaders in their fields, social media interactions, and crowd-sourced insights as people respond to ARK’s public research. By applying technological concepts and external inputs to traditional approaches, ARK seeks to create a more transparent, creative, and interdisciplinary investment process. (Ark Invest, 8h ago)
At its heart, Intelligence for Good empowers purpose-driven companies and organizations to supercharge their social impact efforts with a set of complete, ready-made solutions that address their specific challenges by making both daily operations and longer-term strategic planning more efficient and effective. It will also help level the playing field, making ethical AI accessible to more companies and organizations of all sizes. (Fortune, 8h ago)
Consider adding AI to the table. One of the most contentious project activities is the prioritization and approval of projects and portfolios. After all, projects and portfolios can represent significant resource commitments. With limited resources, there will be winners and losers in organizations, and organizational politics almost always comes into play. Perhaps organizations should invite AI to the table to become one of many evaluators and decision makers. If all people are given an equal role in shaping the data and analytics, AI can be a reasonably unbiased analyzer of project attractiveness. This can lead to less political infighting and more time to develop algorithms that best advance an organization’s decision-making process. (Healthcare Business Today, 1d ago)
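To make the idea concrete, here is a minimal sketch of an algorithmic project evaluator that scores attractiveness from criteria the organization agrees on together. The criteria, weights, and projects are invented for illustration; they are not from the article.

```python
# A sketch of an algorithmic project evaluator: score each candidate project
# against criteria and weights the organization has agreed on together.
# All criteria, weights, and projects here are invented for illustration.
PROJECTS = {
    "ERP upgrade":    {"strategic_fit": 0.8, "expected_roi": 0.4, "risk": 0.3},
    "Patient portal": {"strategic_fit": 0.6, "expected_roi": 0.9, "risk": 0.5},
    "Site refresh":   {"strategic_fit": 0.3, "expected_roi": 0.5, "risk": 0.2},
}
WEIGHTS = {"strategic_fit": 0.5, "expected_roi": 0.4, "risk": -0.1}  # set jointly

def attractiveness(features: dict) -> float:
    """Weighted score of a project's agreed-upon attributes."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

# Rank the portfolio from most to least attractive.
for name, feats in sorted(PROJECTS.items(), key=lambda p: -attractiveness(p[1])):
    print(f"{name}: {attractiveness(feats):.2f}")
```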
STEP ONE: The first step for organizations to solve this problem is to focus on the effective extraction of knowledge from all available sources (e.g., Harvard Business Review, European Business Review, employee think-tanks, scholars, etc.). In doing this, organizations learn important methods of observation, extraction, and application. Observation is one of the important methods of acquiring knowledge, but recent research shows that observation alone, as a means of acquiring knowledge, can only lead to the illusion of learning among learners. Thus, without extraction and application, organizations can falsely assume that things should be done in the same way as before. This causes inertia. False self-confidence limits current employees, who play an important role in gathering, storing, and disseminating future knowledge. This step breaks down the silos and opens up communication to build a knowledge management database. (The European Business Review, 1d ago)
There are two aspects to algorithmic bias: human-induced and data-induced. Human-induced bias is defined as being intentional or unintentional, and these biases (like data-induced ones) are often reinforced and amplified as new iterations of algorithms accumulate past data. The data is unrepresentative and/or insufficient in the case of data-induced bias. While there are ways to combat algorithmic biases (such as taking measures to ensure that the data is diverse and has been reviewed before feeding it to medical AI tools), the fact that deep learning is a black box makes biases easily undetectable—thus making it easier to deliver discriminatory treatment (whether it is intentional or unintentional). (Montreal AI Ethics Institute, 1d ago)
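As a small illustration of the data-review measures mentioned above, here is a sketch that flags under-represented groups in a training set before it is fed to a model. The column name, the 10% floor, and the toy data are hypothetical.

```python
# A sketch of one pre-training review step: flag demographic groups that are
# under-represented in a dataset before it is fed to a model. The column name,
# 10% floor, and toy data are hypothetical.
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str, floor: float = 0.10):
    """Return each group's share of the data, plus the groups below the floor."""
    shares = df[group_col].value_counts(normalize=True)
    return shares, shares[shares < floor]

df = pd.DataFrame({"ethnicity": ["A"] * 800 + ["B"] * 150 + ["C"] * 50})
shares, flagged = representation_report(df, "ethnicity")
print(shares)   # A: 0.80, B: 0.15, C: 0.05
print(flagged)  # group C is below the 10% floor and warrants review
```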


In today’s rapidly evolving technological landscape, a comprehensive approach to risk management is crucial at all levels of an organization. This imperative becomes even more pronounced in operational technology (OT) environments where different perspectives on risk prevail. While traditional risk management practices have primarily focused on physical and financial risks, the emergence of cybersecurity threats has necessitated a paradigm shift. Collaboration with engineering teams is not just desirable but necessary to incorporate cybersecurity risks into the risk register for OT environments. (Industrial Cyber, 1d ago)
Policymakers are increasingly considering risk-based assessments of AI systems, such as the EU AI Act. We believe that in this context, AI systems with the potential for deception should be classified at least as “high-risk.” This classification would naturally lead to a set of regulatory requirements, including risk assessment and mitigation, comprehensive documentation, and record-keeping of harmful incidents. Second, we suggest passing ‘bot-or-not’ laws similar to the one in California. These laws require AI-generated content to be accompanied by a clear notice informing users that the content was generated by an AI. This would give people context about the content they are viewing and mitigate the risk of AI deception. (Montreal AI Ethics Institute, 1d ago)
Sanghvi also recalls how earlier, in case of emergencies, she had to travel at night to the hospital to dictate the report to the typist. “But now, with all the data being stored on the cloud, we can do everything at home,” she says. “All doctors now have their own voice recognition passwords. I verbally dictate the report to the AI from home, and it captures everything and relays it back to the treating doctor in the hospital.” Sanghvi says that, more than any other branch of medicine, radiology is being impacted by AI. “The reason is that the information is already in a digital form, so it helps us―right from image acquisition, post processing to image interpretation and making the report,” she says. (theweek.in, 1d ago)


The use of generative AI tools is ultimately still in its infancy and there are still many questions that need to be addressed to help ensure data privacy is respected and organizations can remain compliant. We all have a role to play in better understanding the potential risks and ensuring that the right guardrails and policies are put in place to protect privacy and keep data secure. (securitymagazine.com, 13d ago)
First and foremost, the reference to “sufficiently detailed summary” must be replaced with a more concrete requirement. Instead of focussing on the content of training data sets, this obligation should focus on the copyright compliance policies followed during the scraping and training stages. Developers of generative AI systems should be required to provide a detailed explanation of their compliance policy including a list of websites and other sources from which the training data has been reproduced and extracted, and a list of the machine-readable rights reservation protocols/techniques that they have complied with during the data gathering process. In addition, the AI Act should allocate the responsibility to further develop transparency requirements to the to-be-established Artificial Intelligence Board (Council) or Artificial Intelligence Office (Parliament). This new agency, which will be set up as part of the AI Act, must serve as an independent and accountable actor, ensuring consistent implementation of the legislation and providing guidance for its application. On the subject of transparency requirements, an independent AI Board/Office would be able to lay down best-practices for AI developers and define the granularity of information that needs to be provided to meet the transparency requirements set out in the Act. (COMMUNIA Association, 27d ago)
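For illustration only, here is a sketch of one building block of such a compliance policy: consulting a machine-readable rights-reservation protocol (robots.txt, via Python's standard library) before reproducing content, so the decision can be logged for the compliance record. The crawler name and URLs are hypothetical.

```python
# A sketch of one compliance-policy building block: consult a machine-readable
# rights-reservation protocol (robots.txt) before reproducing content, so the
# decision can be logged. The crawler name and URLs are hypothetical.
from urllib.robotparser import RobotFileParser

def may_fetch(url: str, site: str, agent: str = "example-training-crawler") -> bool:
    parser = RobotFileParser()
    parser.set_url(f"{site}/robots.txt")
    parser.read()  # fetch and parse the site's robots.txt
    return parser.can_fetch(agent, url)

if may_fetch("https://example.com/articles/1", "https://example.com"):
    pass  # record the source and the protocol consulted, then fetch
```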
The use of AI-based systems to “produce targets at a fast pace” represents a grave example of digital dehumanisation, raising serious concerns around the violation of human dignity and compliance with international humanitarian law. Additionally, the potential reduction of people to data points based on specific characteristics like ethnicity, gender, weight, gait, etc. raises serious questions about how target profiles are created and, in turn, how targets are selected. (Stop Killer Robots, 2d ago)
Generative AI can offer useful tools across the recruiting process, as long as organizations are careful to make sure bias hasn’t been baked into the technology they’re using. For instance, there are models that screen candidates for certain qualifications at the beginning of the hiring process. As well-intentioned as these models might be, they can discriminate against candidates from minoritized groups if the underlying data the models have been trained on isn’t representative enough. As concern about bias in AI gains wider attention, new platforms are being designed specifically to be more inclusive. Chandra Montgomery, my Lindauer colleague and a leader in advancing equity in talent management, advises clients on tools and resources that can help mitigate bias in technology. One example is Latimer, a large language model trained on data reflective of the experiences of people of color. It’s important to note that, in May, the Equal Employment Opportunity Commission declared that employers can be held liable if their use of AI results in the violation of non-discrimination laws – such as Title VII of the 1964 Civil Rights Act. When considering AI vendors for parts of their recruiting or hiring process, organizations must look carefully at every aspect of the design of the technology. For example, ask for information about where the vendor sourced the data to build and train the program and who beta tested the tool’s performance. Then, try to audit for unintended consequences or side effects to determine whether the tool may be screening out some individuals you want to be sure are screened in. (Hunt Scanlon Media, 3d ago)
Another disadvantage is the potential for privacy and data security concerns. Leadership coaching sessions often involve sensitive and confidential discussions about personal and professional development. Storing these recordings or transcripts in an AI system’s database raises concerns about data breaches or unauthorized access. Maintaining strict data security measures becomes paramount to protect the confidentiality of the coaching sessions, and organizations must carefully consider the ethical implications of using AI in this context. The trust between the leader and the coaching process may be compromised if individuals are concerned about the security of their coaching data, potentially discouraging them from participating in such sessions. (Education Week, 19d ago)
Training of AI models requires massive amounts of data, some of which includes PII. There is currently little insight into how the data is being collected, processed and stored, which raises concerns about who can access your data and how they can use it. There are other privacy concerns surrounding the use of AI in surveillance. Law enforcement agencies use AI to monitor and track the movements of suspects. While highly valuable, many are worried about the misuse of those capabilities in public spaces, infringing upon individual rights to privacy. (The ChannelPro Network, 17d ago)


When organizations put appropriate AI governance standards and frameworks in place, training data, algorithms, model infrastructure, and the AI models themselves can be more closely monitored and controlled throughout initial development, training and retraining, deployment, and daily use. This contributes to a more efficient AI operation as well as compliance with relevant data privacy and AI ethics regulations. (eWEEK, 1d ago)
“In order to capitalize on the potential of AI, and perhaps particularly GenAI in the near term, organizations need to be clearer about how they organize and provide continual validation for their supply chain data. There are very interesting and emerging opportunities to point ML (machine learning) solutions at the foundational problems of data availability and quality, and I think leaders who foster technology enablement for data governance within their supply chain will be best situated to realize further productivity gains through more exact and precise decision making (improving decision quality).” – Caleb Thomson, senior director analyst, Gartner Supply Chain practice (Connected World - IoT and Digital Transformation, 1d ago)
The AI Optimists don't make this argument AFAICT, but I think optimism about effectively utilizing "human level" models should transfer to a considerable amount of optimism about smarter-than-human models, due to the potential for using these "human level" systems to develop considerably better safety technology (e.g. alignment research). AIs might have structural advantages (speed, cost, and standardization) which make it possible to heavily accelerate R&D[1] even at around qualitatively "human level" capabilities. (That said, my overall view is that even if we had the exact human capability profile while also having ML structural advantages, these systems would themselves pose substantial (e.g. 15%) catastrophic misalignment x-risk on the "default" trajectory, because we'll want to run extremely large numbers of these systems at high speeds.) (alignmentforum.org, 2d ago)
Ralph Ranalli (Intro): Welcome to the Harvard Kennedy School PolicyCast. I’m your host, Ralph Ranalli. When ChatGPT and other generative AI tools were released to the public late last year, it was as if someone had opened the floodgates on a thousand urgent questions that just weeks before had mostly preoccupied academics, futurists, and science fiction writers. Now those questions are being asked by many of us—teachers, students, parents, politicians, bureaucrats, citizens, businesspeople, and workers. What can it do for us? What will it do to us? Will it take our jobs? How do we use it in a way that’s both ethical and legal? And will it help or hurt our already-distressed democracy? Thankfully, my guest today, Kennedy School Lecturer in Public Policy Bruce Schneier, has already been thinking a lot about those questions, particularly the last one. Schneier, a public interest technologist, cryptographer, and internationally-known internet security specialist whose newsletter and blog are read by a quarter million people, says that AI’s inexorable march into our lives and into our politics is likely to start with small changes, like AI helping write policy and legislation. The future, however, could hold possibilities that we have a hard time wrapping our current minds around—like AIs creating political parties or autonomously fundraising and generating profits to back political parties or causes. Overall, like a lot of other things, it’s likely to be a mixed bag of the good and the bad. The important thing, he says, is to use regulation and other tools to make sure that AIs are working for us—and even paying us for the privilege—and not just for Big Tech companies, a hard lesson we’ve already learned through our experience with social media. He joins me today. (harvard.edu, 2d ago)
KAREN HAO: I wanna start by unpacking the word "safety" first. And I know we've sort of been talking about a lot of different words with squishy definitions, but safety is another one of those, where AI safety as OpenAI defines it is kind of different from what we would typically think around, like, engineering safety. You know, there have been other disciplines; like when we talk about a bridge being safe, it means that it holds up and it works and it resists kind of collapsing under the weight of a normal volume of traffic or even like a massive volume of traffic. With AI safety, the brand of OpenAI's AI safety, it is more related to this kind of extreme risk. Again, they have started adopting more of this, also focusing on current harms like discrimination, but it is primarily focused on these extreme risks. So the question, I guess, to kind of reiterate, is sort of like: will OpenAI continue to focus on research that is very heavily indexed on extreme risks? I think so, but how are they going to change the structure to make sure that these ideological clashes don't happen again? I don't actually think that's possible, and I also think that part of what we learned from this weekend is that we shouldn't actually be waiting for OpenAI to do something about this. There will always be ideological struggles again because of this fundamental problem that we have, which is that no one knows what AGI is, no one agrees with what it is. It's all a projection of your own ideology, your own beliefs, and the AI research talent pool and the broader Silicon Valley talent pool of engineers, product managers, all of those people are also ideologically split on these kind of techno-optimist versus existential-risk divides. So even if you try to restructure or rehire or shuffle things around, you're always going to kind of get an encapsulation of this full range of ideological beliefs within the company, and you're going to end up with these battles because of disagreements around what we are actually working on and how we actually get there. So I personally think that one of the biggest lessons to take away is for policymakers and for other members of the general public and consumers to recognize that this company and this technology is very much made by people. It's very much the product of conscious decisions and an imprint of very specific ideologies. And if we actually want to facilitate a better future with better AI technologies, and AI technologies that are also applied in better ways, it's actually up to much more than OpenAI: it's up to policymakers to regulate the company, it's up to consumers to make decisions that kind of financially pressure the company to continue moving towards directions that we collectively as a society believe are more appropriate. And ultimately, what this boils down to is, I think, AI is such an important technology and so consequential for everyone that it needs to have more democratic processes around its development and its governance. We can't really rely on a company or a board that is, you know, tiny to represent the interests of all of humanity. (Big Think, 2d ago)
These projects are distinct from one another in experimental design. They share the potential, however, to revolutionize what is possible in the care of extremely preterm babies. Existing forms of neonatal care are emergency interventions. The baby is given treatments to stave off the effects of being born with significantly underdeveloped organs. The artificial womb, in contrast, extends the period of gestation to prevent these complications from arising to begin with. If it works, it will enable the infant to keep growing as though it had not yet been born. And with scientists anticipating human trials within the next few years, artificial-womb technology is no longer purely speculative. The “before” and “after” images released by the biobag team were eerie and briefly ubiquitous. In the first, a floating, pink-skinned, wrinkled lamb fetus sleeps adrift in a transparent bag. In the second, it has grown soft white wool and its body presses against the plastic surface, waiting to be born. These pictures evoke much the same reaction that people once felt when they first encountered incubators: the curious sensation of peering into the future. (The Walrus, 3d ago)


Anxiety accompanies the projected shift in decision-making power away from people and towards AI. Worries arise about the slip of our success criteria towards what is easy for existing AI technology to deliver, rather than what reflects the best practices we want to see in medicine and elsewhere. Discussion around the risks of AI tends to focus on safety, data security, and discrimination in machine-based decisions. Ensuring privacy and accuracy can be summed up as the “right to a well-calibrated machine decision.”3 This requires transparent programming, scrutiny, and regulation to remove baked-in biases. But that is only part of the problem. Healthcare professionals fear losing influence and authority in clinical settings of the future, and there is also a real threat to the patient’s autonomy. While appropriately trained AI can outperform clinicians in a range of diagnostic tasks that will continue to increase, AI technologies that are best placed to harness the power of large datasets appear as a black box, without transparent decision-making criteria. This threatens to render impossible critical engagement for clinicians and patients with AI recommendations, which undermines professional and public trust. Don’t patients have “the right to a human decision,”4 a human opinion, or at least a human discussion? (The BMJ, 13d ago)
Gupta also sees a growing need for advanced tools that can automate the detection of biases, ethical lapses, and security vulnerabilities in real time. Better integration with AI explainability tools could provide clear insights into AI decision-making processes for both technical and non-technical stakeholders. All of this will require investment in research and development, focusing on the intersection of AI technology, cybersecurity, and ethics. (diginomica, 12d ago)
As we embark on this enlightening journey, it is imperative for practitioners in the agriculture sector to sharpen their focus on the burgeoning intersection of technology and farming. The integration of artificial intelligence (AI) into agriculture has the potential to revolutionize the way we produce, distribute and consume food, offering both unprecedented opportunities and significant risks. This month-long exploration aims to shed light on why practitioners must be proactive in navigating the evolving landscape of AI in agriculture, particularly in low- and middle-income countries, where the impact can be transformative. (Agrilinks, 3d ago)


As Webster concludes, ultimately, humans should be accountable for AI. “While some people talk about giving AI systems legal rights, accountability must rest with those who make decisions about AI use and deployment. It's the responsibility of humans to ensure that AI systems are governed correctly, that biases are addressed, and that ethical considerations are upheld. Having strong data and AI governance practices in place helps uphold accountability by guiding the responsible use of AI.” (technologymagazine.com, 3d ago)
Imagine a scenario where you have a large dataset of financial transactions. With classical computing, analyzing this data would take a significant amount of time and resources. However, with Quantum AI, the parallel processing power of qubits allows for a much faster and more efficient analysis. This means that financial institutions can quickly identify patterns and trends in the data, leading to more accurate predictions and informed decision-making. (Techiexpert.com, 3d ago)
There’s currently much debate within AI circles about whether AGI systems are inherently dangerous. Some researchers believe that AGI systems are inherently dangerous because their generalized knowledge and cognitive skill will permit them to invent their own plans and objectives. Other researchers believe that getting to AGI will be a gradual, iterative, process in which there will be time to build in thoughtful safety guardrails at every step. (Fast Company, 3d ago)
AI-powered surveillance systems are gradually becoming common, and with this many of us have started to worry about privacy and safety. For example, in China, they use facial recognition technology to watch people closely, while in the United States the police use algorithms to predict where crimes might happen. These technologies could violate the personal freedoms of people and make inequalities in society even worse. In simple words, they might invade our privacy and make the gaps between different groups of people even bigger. (Techiexpert.com, 3d ago)
Goal: We're working on a new system that makes it easier for artificial intelligence to understand what's important to you personally, while also reducing unfair or biased decisions. Our system includes easy-to-use tools that help you identify and mark different situations where the AI might be used. These tools use special techniques, like breaking down text into meaningful parts and automatically labelling them, to make it simpler to create settings that are tailored to you. By doing this, we aim to address the problem of AI not fully grasping people's unique backgrounds, preferences, and cultural differences, which can sometimes lead to biased or unsafe outcomes. (alignmentforum.org, 3d ago)
As we peer into 2024, a tapestry of uncertainties at the intersection of business and technology looms large. With its transformative yet controversial potential, the generative AI hype continues to shape industries. Economic uncertainty remains a constant companion, injecting dreaded unpredictability into the business world. The widening skill gap and labor challenges are pressing concerns demanding more aggressive innovative solutions. These macro trends are trickling downstream, impacting organizations of all sizes across industries. Our predictions for 2024 shed light on how these complex forces manifest in enterprise software. (Best ERP Software, Vendors, News and Reviews, 3d ago)


Decision making and problem solving: How does predictive AI augment managers’ cognitive processes and decision-making strategies? In what ways does generative AI alter the nature of problem solving in organizations? Can AI help identify and formulate problems? How does the presence of AI affect the biases and heuristics observed in human decision-making processes within organizations? To what extent does the generation of new data solve AI’s traditional challenges of data availability and quality in decision making? (AOM_CMS, 4d ago)
DATA: Organisations cannot neglect the importance of having data ‘AI-ready’. While data serves as the backbone needed for AI operations, it is also the area where readiness is the weakest, with the greatest number of Laggards (9%) compared to other pillars. 73% of all respondents claim some degree of siloed or fragmented data in their organisation. This poses a critical challenge, as the complexity of integrating data that resides in various sources and making it available for AI applications can impact the ability to leverage the full potential of these applications. (CRN - India, 18d ago)
Bias in AI Decision-Making. AI's skewed understanding could influence how it responds to queries or develops ethical frameworks, potentially leading to decisions that don't align with a balanced human perspective. (Psychology Today, 8d ago)
Firstly, let us define what bias means in the context of AI models. Bias refers to the unequal treatment or favoritism towards one group over another. In AI models, this can manifest as discrimination against certain individuals based on their race, gender, age, or other characteristics. This can have a significant impact on decision making processes, leading to unfair outcomes and perpetuating societal inequalities. (WriteUpCafe.com, 27d ago)
Transparency in AI processes is paramount. As of 2023, AI explainability tools like LIME and SHAP are gaining traction. We must demand that AI systems provide insights into their decision-making, especially in critical areas like healthcare and finance. Responsible AI isn’t a buzzword; it’s a necessity for the future. As we embrace AI’s potential, let’s do so responsibly, ensuring it benefits all of humanity. Together, we can build a future where AI serves as a force for good, shaping a better world. (globaltechcouncil.org, 28d ago)
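As a concrete taste of the explainability tooling mentioned above, here is a minimal SHAP sketch on a toy model; the dataset and model are illustrative, and a real healthcare or finance audit would of course go much deeper.

```python
# A sketch of an explainability check with SHAP: fit a toy model, then surface
# which features drive its predictions. Dataset and model are illustrative.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = shap.Explainer(model)      # picks a tree-specific explainer here
shap_values = explainer(X.iloc[:200])  # per-prediction feature attributions
shap.plots.bar(shap_values)            # global view of what drives decisions
```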
Yeah. So I believe very strongly that we will have a lot more automated decision making in lending. It’s not to say that certain decisions won’t still require manual review or won’t still require a second set of eyes, but automated decisioning needs to proliferate further than it already has. And that’s going to happen across different product lines. But what I think is really important, and this goes to the future of AI in credit and other places, is that the types of systems that are going to win, that are going to provide the most value to customers, are systems that allow for input from ultimately multiple sources. So that could be data as one source, but also humans, who… Machine learning is really good at eating data and finding insight. Humans are really great at applying context to that data, information that is outside of the data elements. So I believe, if you will, for the AI of the future, especially for regulated use cases, but I think for other use cases as well, as the public awareness of AI systems grows, as we get new regulation likely coming, kind of following a lot of the regulation that we’ve seen in Europe, and we’ve already seen the initial stride with that with 1033, there’s going to be a real focus on how do I understand what is happening, not just from data, but also from people? Combine those two into one automated system, and ensure that I can tell the FI, or the other type of business can tell their customer on the other side, what the heck happened? How was this decision made? What information was used? How can I help you get to a different decision, which I continue to believe is a huge opportunity for a case where you have a negative outcome? How do you build a relationship with that customer to help them get to a positive outcome? You know, it’s going to be AI systems that can do that that are going to actually deliver on all of the promise and all of the value that we hear about in all the newspapers. (Zephyrnet, 11d ago)


The opaqueness in the decision-making process of LLMs like GPT-3 or BERT can lead to undetected biases and errors. In fields like healthcare or criminal justice, where decisions have far-reaching consequences, the inability to audit LLMs for ethical and logical soundness is a major concern. For example, a medical diagnosis LLM relying on outdated or biased data can make harmful recommendations. Similarly, LLMs in hiring processes may inadvertently perpetuate gender biases. The black box nature thus not only conceals flaws but can potentially amplify them, necessitating a proactive approach to enhance transparency. (unite.ai, 3d ago)
stranger: You know how Pat ended up calculating that there ought to be 1,000 works of Harry Potter fanfiction as good as Methods? And you know how I got all weepy visualizing that world? Imagine Maude as making a similar mistake. There’s a world in which some scruffy outsider like you wouldn’t be able to estimate a significant chance of making a major contribution to AI alignment, let alone help found the field, because people had been trying to do serious technical work on it since the 1960s, and were putting substantial thought, ingenuity, and care into making sure they were working on the right problems and using solid methodologies. Functional decision theory was developed in 1971, two years after Robert Nozick’s publication of “Newcomb’s Problem and Two Principles of Choice.” Everyone expects humane values to have high Kolmogorov complexity. Everyone understands why, if you program an expected utility maximizer with utility function 𝗨 and what you really meant is 𝘝, the 𝗨-maximizer has a convergent instrumental incentive to deceive you into believing that it is a 𝘝-maximizer. Nobody assumes you can “just pull the plug” on something much smarter than you are. And the world's other large-scale activities and institutions all scale up similarly in competence. (lesswrong.com, 3d ago)
This research agenda focuses on self-improving systems, meaning systems that take actions to steer their future cognition in desired directions. These directions may include reducing biases, but also enhancing capabilities or preserving their current goals. Many alignment failure stories feature such behaviour. Some researchers postulate that the capacity for self-improvement is a critical and dangerous threshold; others believe that self-improvement will largely resemble the human process of conducting ML research, and it won't accelerate capabilities research more than it would accelerate research in other fields. (lesswrong.com, 3d ago)


Soltani said the agency's board and the public will have opportunities to provide input on the proposed rules starting next month. The guidelines are meant to clarify how the 2020 California Consumer Privacy Act — which addressed a range of electronic and online uses of personal information — should apply to decision-making technology. The proposal also outlines options for how consumers' personal information could be protected when training AI models, which collect massive data sets in order to predict likely outcomes or respond to prompts with text, photo and video. OpenAI and Google already have been sued over their use of personal information found on the internet to train their AI products. (GovTech, 3d ago)
The concept of misinformation has deep historical roots. Throughout various epochs, from ancient civilizations to the modern digital age, misinformation has consistently influenced human communication. This includes the distortion of facts in pre-print societies' oral storytelling (Burkhardt, 2017) through to contemporary digital information warfare conducted between states (Karpf, 2019). In academic research, the two World Wars played a pivotal role in shaping early scholarship around the topic. Following World War I, academic research in propaganda studies analysed the techniques employed during the war and their societal impact (Bernays, 1928; Lasswell, 1927). The post-Second World War era witnessed an increased focus on academic research regarding rumours, acknowledging their significant impact on shaping public perceptions and attitudes towards the war effort. This phenomenon garnered particular attention within the field of social psychology, as researchers sought to gain a deeper understanding of the psychological processes involved in the propagation of rumours (Allport & Postman, 1947). Since the late 20th century, the advent of information and communication technologies has greatly propelled contemporary research on misinformation. Within this burgeoning field, scholars have dedicated significant efforts to investigate the intricate role of digital communication technologies in shaping the multifaceted landscape of misinformation (e.g. Marres, 2018; Napoli, 2019; Tufekci, 2018). (Internet Policy Review, 3d ago)
In addition, AI can be used to build better models, and to speed up simulators by leveraging more artificial neural networks. “There are physics-based analytical models, but you’ve got to take lots of measured data in order to get there,” said Slater. “What if you could use AI heuristic networks to create a model that’s just as good at curve-fitting, but crucially, doesn’t need as much input data, and can execute much faster because the model is based on neural nets, not complex netlists?” (Semiconductor Engineering, 4d ago)
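A minimal sketch of that surrogate idea, assuming a generic slow simulator: sample (input, output) pairs from it once, then fit a small neural network that evaluates near-instantly. The stand-in "physics" and sizes are invented for illustration, not drawn from the article.

```python
# A sketch of the surrogate idea: sample (input, output) pairs from a slow
# simulator once, then fit a small neural net that evaluates near-instantly.
# The stand-in "physics" and sizes are invented for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor

def slow_simulator(x: np.ndarray) -> np.ndarray:
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1])  # placeholder for a real solver

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, size=(2000, 2))  # sampled operating points
y_train = slow_simulator(X_train)            # the "measured data"

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X_train, y_train)
print(surrogate.predict(rng.uniform(0, 1, size=(5, 2))))  # fast stand-in calls
```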
However, data is the fuel of Artificial Intelligence. AI needs vast amounts of data in order to start generating usable insights. Large Language Models (LLMs) are a subset of AI where the algorithm is designed to learn from tremendous amounts of diverse data to generate new multimodal content including text, image, audio, video, code, and 3D—hence generative AI. Without the algorithm, big data is just noise. And without data, the algorithm is irrelevant. (Datanami, 4d ago)
Because this may involve sensitive customer data, you’re choosing to isolate this workload from other models and host it on a single-model endpoint, which can make it challenging to scale because you have to spin up and manage separate endpoints for each FM. The generative AI application you’re using the model with is being used by service agents in real time, so latency and throughput are a priority, hence the need to use larger instance types, such as a P4De. In this situation, the cost may have to be higher because the priority is isolation, latency, and throughput. (CoinGenius, 3d ago)
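For illustration, here is a minimal sketch (with the SageMaker Python SDK) of the isolation/latency choice described above: one model deployed to its own dedicated endpoint on a large GPU instance. The container image, artifact path, role, and endpoint name are placeholders, not the article's actual workload.

```python
# A sketch of the isolation/latency choice above: one model on its own
# dedicated endpoint, on a large GPU instance, via the SageMaker Python SDK.
# The container image, artifact path, role, and endpoint name are placeholders.
from sagemaker.model import Model

model = Model(
    image_uri="<inference-container-image>",
    model_data="s3://<bucket>/<model-artifact>.tar.gz",
    role="<execution-role-arn>",
)
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.p4de.24xlarge",       # large instance for latency/throughput
    endpoint_name="isolated-agent-assist",  # single-model endpoint: no co-tenants
)
```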
Regulators need to implement guidelines that help standardize the language around AI, which will help with understanding the model being used, and ultimately regulate risk parameters for these models. Otherwise, AI has the potential to take the exact same data set and draw wildly different conclusions based upon its biases—conscious or unconscious—that are ingrained from the outset. More importantly, without a clear understanding of the model, a business cannot determine if outputs from the platform fit within its own risk and ethics criteria. (TechRadar, 4d ago)


However, these barriers are not insurmountable. It’s promising that both Congress and the White House acknowledge the profound societal and economic implications of AI in the US, demonstrating a sense of urgency in comprehending and mitigating potential harms through bipartisan efforts. Directing this momentum to increasing funding and supporting the recruitment of subject matter and industry experts to regulatory agencies would position the US as a leader in safeguarding against AI risk, while promoting technological innovation. (Tech Policy Press, 4d ago)
AI tools are particularly valuable when a large amount of clean data is available and where connections and correlations between those data sets can be mapped relatively easily. With this in mind, in the Biomaterials space where Modern Meadow operates, AI tools will be helpful in many ways. First, supporting consumers when navigating overwhelming amounts of information on products and materials. AI tools will allow them to easily find the most sustainable options while avoiding being swayed by fake news. (Fast Company, 4d ago)
Multimodal AI and AI simulation will reach new frontiers: “The integration of text, images and audio into a single model is the next frontier of generative AI. Known as multimodal AI, it can process a diverse range of inputs simultaneously, enabling more context-aware applications for effective decision making. An example of this will be the generation of 3D objects, environments and spatial data. This will have applications in augmented reality [AR], virtual reality [VR], and the simulation of complex physical systems such as digital twins.” – Marinela Profi, AI/Generative AI Strategy Advisor, SAS (MarTech Series, 4d ago)
Azam Sahir, Chief Product Officer at MongoDB, reiterated the value that this partnership holds for its customers. "Customers of all sizes from startups to enterprises tell us they want to use generative AI to build next-generation applications and future-proof their businesses," said Azam. "Many customers express concern about ensuring the accuracy of AI-powered systems' outputs whilst also protecting their proprietary data. We're easing this process for our joint-AWS customers with the integration of MongoDB Atlas Vector Search and Amazon Bedrock. This will enable them to use various foundation models hosted in their AWS environments to build generative AI applications, so they can securely use proprietary data to improve accuracy and provide enhanced end-user experiences." (ChannelLife Australia, 4d ago)
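A hedged sketch of how such an integration might be wired: embed a query with a Bedrock-hosted Titan embedding model, then run an Atlas Vector Search over proprietary documents. The connection string, index name, and field names are placeholders; this illustrates the announced combination rather than reproducing code from either company.

```python
# A sketch of the combination described: embed a query with a Bedrock-hosted
# Titan embedding model, then run Atlas Vector Search over proprietary docs.
# The connection string, index, and field names are placeholders.
import json
import boto3
from pymongo import MongoClient

bedrock = boto3.client("bedrock-runtime")
resp = bedrock.invoke_model(
    modelId="amazon.titan-embed-text-v1",
    body=json.dumps({"inputText": "refund policy for enterprise customers"}),
)
query_vector = json.loads(resp["body"].read())["embedding"]

docs = MongoClient("<atlas-connection-string>")["support"]["articles"]
results = docs.aggregate([{
    "$vectorSearch": {
        "index": "articles_vector_index",  # pre-built Atlas vector index
        "path": "embedding",               # field holding stored embeddings
        "queryVector": query_vector,
        "numCandidates": 100,
        "limit": 5,
    }
}])
```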
• Bring: Boards on board. Unless board members understand GenAI and its implications, they will be unable to judge the likely impact of a company’s AI strategy and related decisions regarding investments, risk, talent, and technology on their stakeholders. “Our conversations with board members reveal that many of them admit they lack this understanding,” McKinsey says. (DATAQUEST, 4d ago)
Artificial intelligence (AI) may identify patterns, correlations, and variances in data sets that human analysts would miss. This precision aids in the creation of fact-based judgments. The marketing team can gain deeper insights into consumer behavior, preferences, and interactions by utilizing AI-driven data. Their decision-making is well-founded due to their precision, which virtually eliminates any room for error. (MarTech Series, 4d ago)


The real problem is that consumers themselves are the ones on the hook. Legislation can keep open-source LLMs in check because their models grow from publicly available data, but won't have the reach to regulate ones whose growth depends on data collected privately, especially as the technology balloons across the industry. Whenever a customer uses an open-source LLM, their search history, in-app behavior, and identifying details are logged in service of further educating the AI. That quid pro quo isn't obvious to users, and this means best practices have to be created and enforced by the vendors themselves — a questionable proposition, at best. (diginomica, 4d ago)
Image recognition – Multi-modal AI can precisely identify objects, persons, and activities through the analysis and interpretation of visual data, including photos and videos. Technologies that rely on image and video analysis have developed largely thanks to the ability to analyze visual information. Improved security systems with person identification capabilities and the ability for self-driving cars to perceive and react to their environment are some examples. (MarkTechPost, 4d ago)
AI can fuel the ideas of more education entrepreneurs. Steve Jobs said that the personal computer was like a “bicycle for the mind,” but in the hands of education entrepreneurs, AI is more like an airplane, allowing them to do more and higher-quality work than was previously possible. Studies show that good management matters, including in schools—and AI tools are poised to make high-quality management advice more accessible to the masses. In a recent study, researchers at Harvard Business School showed that access to GPT-4 helped consultants at Boston Consulting Group produce higher-quality work and also leveled the playing field between lower and higher performers. If education entrepreneurs can leverage AI tools in a similar way, many more people with a passion to educate children will be able to generate strategic plans, pressure-test their curricular and pedagogical ideas, and build other key infrastructure needed to move from an idea to a more fully-formed organizational concept. (The Thomas B. Fordham Institute, 4d ago)


However, for less capable AI systems, ones not powerful enough to run a good utilitarian value function, a set of deontological ethical heuristics (and also possibly-simplified summaries of relevant laws) might well be useful to reduce computational load, if these were carefully crafted to cover the entire range of situations that they are likely to encounter (and especially with guides for identifying when a situation was outside that range and it should consult something more capable). However, the resulting collection of heuristics might look rather different from the deontological ethical rules I'd give a human child. (alignmentforum.org, 26d ago)
Errors in input data uncorrelated to outcomes (e.g. random lens flare in image data) become critical in applications where inaccuracy is only acceptable within tight margins (e.g. self-driving vehicles), and much less so if AI is used as a complementary system that recommends actions to a human decision maker. Further, applications have varying tolerance for type I and type II errors (Shafer and Zhang 2012: 341). Consider the trade-off between freedom of speech and illegal content in content moderation systems. Avoiding type I errors (not removing illegal content) seems preferable to avoiding type II errors (removing lawful content). However, these prediction errors can only be classified if the ground truth is at least approximately known. (CEPR, 12d ago)
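A small worked example of that type I / type II trade-off in the content-moderation setting; the scores and labels are invented, and the error definitions follow the passage (type I: illegal content not removed, type II: lawful content removed).

```python
# A worked example of the trade-off: moving a moderation model's threshold
# trades type I errors (illegal content left up) for type II errors (lawful
# content removed). Labels and scores are invented; 1 = illegal, 0 = lawful.
import numpy as np

y_true = np.array([1, 1, 1, 0, 0, 0, 0, 0])
scores = np.array([0.9, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1, 0.05])

for threshold in (0.5, 0.3):
    removed = scores >= threshold
    type_i  = int(np.sum(~removed & (y_true == 1)))  # illegal content left up
    type_ii = int(np.sum(removed & (y_true == 0)))   # lawful content taken down
    print(f"threshold={threshold}: type I={type_i}, type II={type_ii}")
# Lowering the threshold removes more illegal content (fewer type I errors)
# at the price of removing more lawful content (more type II errors).
```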
Distributed Intelligence, on the other hand, represents the analytical prowess of AI systems that excel in processing vast amounts of information, identifying patterns across multiple data sets, and providing consistent, objective analysis. It extends human capability by handling tasks that are too cumbersome or complex for the human brain, offering scalability and efficiency in problem-solving. This pole is crucial for making sense of big data, enabling predictive analytics, and supporting decision-making processes that benefit from a lack of emotional bias and the ability to synthesize diverse perspectives into coherent patterns. (Integral Life, 27d ago)
At its core, the essence of transparency policies places the responsibility for alleviating (ethical) harm on the individual.15 Going back to the examples above, it is the individual who is expected to recalibrate their buying habits, change consumption choices, or even stop their habits in response to energy labels, sugar, and calorie content information, and alerts about smoking risks. A similar assumption holds for AI transparency policies. Here, it is the user who engages with AI systems, such as LLMs, who is responsible for adapting their behavior once they know they interact with AI. Take popular AI language models like ChatGPT or Google’s Bard. Such models come with a warning about the AI producing potentially “inaccurate or offensive information,” assuming that users will factor it into their decision-making. Yet, our findings show that such disclaimers will likely not help when someone is motivated to leverage AI output to further their self-interests. (psychologytoday.com, 6d ago)
With technological advancements, financial lenders can now lower their risk by utilizing a variety of client data. Relevant data is analyzed and distilled into a single value known as a credit score that represents the lending risk, using statistical and machine learning methods. A lender might be more confident in a customer’s creditworthiness the higher their credit score. Credit scoring, a type of AI technology based on predictive modelling, estimates the probability that a customer will miss a transaction, become overdue, or be insolvent. The time it takes to assess a company’s financial situation is reduced by automated credit decision-making systems made possible by data-driven AI technologies. Examining a larger number of data points in a shorter period of time and producing quicker credit scores enables closer monitoring of a customer’s actions and creditworthiness. (nasscom | The Official Community of Indian IT Industry, 17d ago)
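As a rough illustration of the predictive modelling described above, here is a minimal sketch that maps customer features to a default probability and then onto a score band. The features, synthetic data, and 300-850 mapping are all illustrative, not any lender's actual scorecard.

```python
# A sketch of the predictive-modelling step: map customer features to a
# default probability, then onto a score band. The features, synthetic data,
# and 300-850 mapping are illustrative, not any lender's scorecard.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))  # e.g. utilization, history length, arrears
y = (X @ np.array([1.5, -1.0, 0.8]) + rng.normal(size=1000)) > 0  # 1 = missed payment

model = LogisticRegression().fit(X, y)
p_default = model.predict_proba(X[:1])[0, 1]  # probability of a missed payment
score = int(300 + (1 - p_default) * 550)      # map onto a 300-850 band
print(f"default probability {p_default:.2f} -> credit score {score}")
```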
AI can suggest, recommend, or advise, but the ultimate responsibility for legal decisions rests squarely on your human shoulders. AI is a tool to enhance your decision-making processes, not one that replaces your legal analysis and reasoning. From the foundational stage of AI adoption forward, you must exercise your professional judgment and legal expertise to critically evaluate AI outputs, keeping legal and ethical obligations in mind. (Above the Law, 7d ago)


In LLMOps the main differences compared to MLOps are model selection and model evaluation, which involve different processes and metrics. In the initial experimentation phase, the data scientists (or fine-tuners) select the FM that will be used for a specific Generative AI use case. This often results in the testing and fine-tuning of multiple FMs, some of which may yield comparable results. After the selection of the model(s), prompt engineers are responsible for preparing the necessary input data and expected output for evaluation (e.g. input prompts comprising input data and query) and defining metrics like similarity and toxicity. In addition to these metrics, data scientists or fine-tuners must validate the outcomes and choose the appropriate FM not only on precision metrics, but on other capabilities like latency and cost. Then, they can deploy a model to a SageMaker endpoint and test its performance on a small scale. While the experimentation phase may involve a straightforward process, transitioning to production requires customers to automate the process and enhance the robustness of the solution. Therefore, we need to deep dive on how to automate evaluation, enabling testers to perform efficient evaluation at scale and implementing real-time monitoring of model input and output. (CoinGenius, 4d ago)
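One way to picture the automated evaluation step is the minimal batch loop below. The generate callable stands in for an endpoint invocation, and the metric is deliberately simple: a character-level similarity ratio rather than the embedding- or toxicity-based scores the passage also mentions.

```python
# A sketch of the batch-evaluation step: score a candidate FM's outputs against
# expected answers. `generate` stands in for an endpoint call, and the metric is
# deliberately simple (a character-level ratio, not an embedding-based score).
from difflib import SequenceMatcher

EVAL_SET = [
    {"prompt": "Summarize: the invoice is 30 days overdue.",
     "expected": "The invoice is 30 days overdue."},
]

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def evaluate(generate, eval_set, threshold: float = 0.7) -> dict:
    scores = [similarity(generate(ex["prompt"]), ex["expected"]) for ex in eval_set]
    mean = sum(scores) / len(scores)
    return {"mean_similarity": mean, "passed": mean >= threshold}

# Wire in a real endpoint call in place of the lambda to compare candidate FMs.
print(evaluate(lambda prompt: "The invoice is 30 days overdue.", EVAL_SET))
```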
Moshe observed that when performing processor design back in the ’70s, designers weren’t referred to as tall thin engineers. They typically did architecture, micro architecture, algorithms, logic design, circuit design and sometimes even layout: very broad in their ability to do things, but not as deep. Moshe said that he was an expert in the use of multi-dimensional Karnaugh maps and was a colleague of Zvi Kohavi, and then came Synopsys. It put him out of business, and he had to go into management. He said that this overall was a 30-year transition, and AI will do this in a third to a fifth of the time. AI will unfold at a much faster rate and be more impactful. An inhibitor will be the availability of data, but he expects that there will be a big donation and it will be fascinating to see how it unfolds. (Semiconductor Engineering, 4d ago)
“AI has the capacity to impact healthcare delivery in a positive way. There are currently many AI applications specific to radiology that show promise in terms of enhancing workflows and streamlining medical imaging procedures,” said Dr. Chong. “Many of these technologies have been integrated into healthcare systems outside of Canada with favourable results. However, before broadly introducing them to the Canadian healthcare system, it is essential that a national regulatory framework has been developed which includes expert oversight to maximize safety and value.” (Hospital News, 4d ago)
Looking ahead, it is evident that AI will play an increasingly vital role in procurement. As AI technologies continue to advance, they will offer even more sophisticated tools for data analysis, process automation, and strategic decision-making. The future of procurement will likely see a blend of human expertise and AI capabilities, where procurement professionals leverage AI to enhance their strategic and analytical skills. (electronicspecifier.com, 4d ago)
The IEEE recently conducted a study, finding that 65% of CTOs and CIOs believe AI will be the most important technology next year and will be used in diverse ways across the global economy. Leaders also reported that they will be focusing on AI applications and algorithms that can optimise data, perform complex tasks and make decisions with human-like accuracy. Potential applications include... (electronicspecifier.com, 4d ago)
“Early public and private AI solutions have yielded significant financial rewards for early adopters,” said Brian Davidson, Congruity360 Chief Executive Officer and Managing Partner. “As the impact of AI projects scale, it is essential that the fuel of AI, aka data, is correctly classified and de-risked. PII and private corporate data must be classified out of AI training models. Congruity360 is uniquely positioned to offer AI a simple, fast, and automated classification engine. Do not ruin the rewards of AI by unknowingly feeding it risk and obsolete data.” (prnewswire.com, 4d ago)


Artificial intelligence has become an indispensable tool in data visualization. By automating data processing, identifying patterns, and generating insightful visualizations, AI enhances our ability to gain valuable insights from data. However, it is vital to acknowledge the challenges and limitations of AI and follow best practices to ensure the accuracy and reliability of the visualizations. As AI continues to evolve, we can expect more advanced and intelligent data visualization tools to revolutionize industry decision-making processes. (Science Times, 26d ago)
Safety: AI systems should not, under defined conditions, lead to a situation that endangers human life, health, property or the environment. GenAI tools can represent notable risks to public health and safety, whether from malicious intent or just lack of quality control. Their scaling capabilities have the potential to be used to spread misinformation and disinformation; and in sensitive domains like health care and public safety, GenAI should be evaluated to determine whether it’s necessary and beneficial, and given careful governance to mitigate risk. (GovTech, 5d ago)
The ability of AI to convert data into actionable insights can help organizations with everything from better efficiency, customer service, alignment on company-wide goals, and transparency to generating new revenue models. The power and scale that AI offers businesses and governments has been proven, and the excitement of generative AI has taken it to a different level. As organizations feed different data sources and datasets to artificial intelligence, it’s important to have the right guardrails in place to ensure data quality, governance compliance, and transparency within your AI systems. (Analytics Insight, 12d ago)


What, he asks, would be the role of book publishers in a fully digitized environment organized in accordance with open access? After the cost of a book’s first copy is covered, an unlimited supply of subsequent copies, provided directly from an electronic repository, would be virtually free. The publisher’s function as a gatekeeper would cease to exist because there would be no more gates—and therefore an end to “the university world’s thralldom to the prestige hierarchy of the established publishing venues.” Acquisitions editors would be superfluous. Already they do little more than weeding out inferior texts and “reading around on company time,” Baldwin claims. Improved search engines would handle the selection process, taking readers right to the works they want, which would be available on the global bulletin board. Peer review could therefore be eliminated, along with unnecessary apparatuses such as professionally designed layout, indexes, dust jackets, blurbs, and sales catalogs. Most bookstores would disappear; libraries would be reduced to storehouses of old-fashioned volumes; and virtually all cultural intermediaries—book reviewers, literary agents, advertisers—would be eliminated, because their functions would be replaced by the all-powerful search engines putting readers in direct communication with texts in the all-encompassing cloud. (The New York Review of Books, 4d ago)
At its big cloud computing shindig up in Las Vegas, Amazon’s AWS division finally announced its entry in the AI chatbot wars. Oddly enough, the new LLM-based chatbot is called “Amazon Q.” Unlike Google’s Bard and OpenAI’s ChatGPT, Amazon Q is not intended for the general public. Rather, the bot is designed for workers within large enterprises that need AI assistance to access and synthesize their company’s corporate data. For many companies, all that data is stored in the AWS cloud, with AWS guaranteeing their data is secure. Security is the reason many companies have been hesitant to use chatbots that weren’t designed with businesses in mind (like the consumer version of ChatGPT); they fear that a third-party chatbot might leak the data or put it in the wrong hands internally. AWS customers will likely trust it to keep data safe, and the assistant can use the permissioning system that the customer company already has set up to govern which employees get access to various types of data. (Fast Company, 4d ago)
Generative AI is an exceptional tool for helping with incident response. By building workflows that use AI insights to examine payloads associated with incidents, the mean time to resolve (MTTR) incidents can be noticeably reduced. It is critical to use retrieval augmentation in these scenarios, as it is likely impossible to train a model to account for every possible circumstance. When you apply retrieval augmentation to additional external data sources, such as threat intelligence, you get an automated workflow that is accurate and works to eliminate hallucinations. (The Cyber Security News, 4d ago)
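To make that pattern concrete, here is a minimal sketch of retrieval-augmented incident triage. The llm callable and the naive keyword retriever are stand-ins for a deployed model and a vector store; none of this is from the article.

```python
# A sketch of retrieval-augmented incident triage: pull the most relevant
# threat-intelligence notes for a payload and ground the model's analysis in
# them. `llm` and the naive keyword retriever are stand-ins for a deployed
# model and a vector store.
def retrieve(payload: str, threat_intel: dict[str, str], k: int = 2) -> list[str]:
    scored = sorted(
        threat_intel.items(),
        key=lambda item: -sum(w in item[1].lower() for w in payload.lower().split()),
    )
    return [text for _, text in scored[:k]]

def triage(llm, payload: str, threat_intel: dict[str, str]) -> str:
    context = "\n".join(retrieve(payload, threat_intel))
    prompt = (
        "Using ONLY the threat intelligence below, assess this incident payload.\n"
        f"Threat intelligence:\n{context}\n\nPayload:\n{payload}"
    )
    return llm(prompt)  # grounded answers are less likely to be hallucinated
```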
All that said, if we look past 2024, most marketers are underestimating the absolutely transformational impact AI will have on go-to-market strategies and platforms. In the future, text-based conversational user interfaces will automatically identify the right audience for any campaign, eliminating the need for marketing operations expertise in data and complex segmentation rules. AI will also enhance the effectiveness of existing campaigns by streamlining A/B testing and segmentation, and will transform broadcast-like campaigns into engaging conversations, analyzing and reacting to user responses appropriately.OpenView, 4d ago
Are these conversations to truly discuss the risks and safety regarding AI or is there a larger agenda at play? It is no secret that both the U.S. and China are leveraging AI but to what extent may remain a mystery. Whether or not both nations come to a consensus on the application of AI, both must consider the consequences (good or bad) surrounding the use of AI. AI has proved to ease the burden of repetitive data processing tasks. However, the illuminated potential of AI extends beyond conventional applications, gaining notable attention in fields such as space, intelligence, and cyber. It is imperative to approach these technologies cautiously, as the unchecked deployment of AI may evolve into an influential and clandestine instrument, capable of exerting prompt and destructive effects on a global scale. The repercussions of such employment may remain undetectable until their manifestation reaches an irreversible magnitude, highlighting the critical necessity for strategic caution and preemptive measures in the utilization of AI across diverse sectors.Modern Diplomacy, 4d ago

Top

Explainable AI refers to the capability of AI systems to provide clear and understandable explanations for their decisions and actions. In traditional machine learning models, particularly complex ones like deep neural networks, the decision-making process can be opaque, often referred to as the “black box” problem. Explainable AI aims to lift this veil, making AI systems more interpretable and trustworthy.WriteUpCafe.com, 20d ago
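By way of illustration (a minimal sketch, not drawn from the article), permutation importance is one widely used post-hoc technique for peeking inside an otherwise opaque model: shuffle one feature at a time and measure how much performance degrades.

```python
# A minimal post-hoc explainability sketch using permutation importance,
# one common way to interrogate a "black box" model.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# large drops mean the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.3f}")
```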
Additionally, the President calls on Congress to pass federal data privacy protections, and then through the EO’s Section 9 directs agencies to do what they can to protect people’s data privacy without Congressional action. The section opener calls out not only “AI’s facilitation of the collection or use of information about individuals,” but also specifically “the making of inferences about individuals.” This could open up a broader approach to assessing privacy violations, along the lines of “networked privacy” and associated harms, which considers not only individual personal identifiable information but the inferences that can be drawn by looking at connected data about an individual, or relationships between individuals.Brookings, 18d ago
...“In a tech landscape abuzz with the potential of generative AI, understanding its power and pitfalls is a must. Organizations across every vertical are looking into how to leverage this technology to get ahead or simply ensure they can keep up with the latest and greatest advancements. Rushing implementation and forgoing the necessary learning process isn’t going to be worth the investment. A thoughtful, concerted approach that is rooted in data is essential for generative AI. To truly reap the advantages of AI investments, organizations must take a close look at their data strategy and ensure they are prepared for success. If using it to solve specific business problems is the goal, it’s key to consider how the training process for those models is done and examine the underlying foundation of data that you’re supplying in order to get the result you need.”insideBIGDATA, 11d ago
One common concern when implementing AI in sales is the fear of job displacement. However, it’s important to emphasize that AI enhances human capabilities rather than replacing them. By automating repetitive tasks and providing data-driven insights, AI enables sales professionals to focus on building relationships with customers and making strategic decisions. This collaboration between humans and AI can lead to increased productivity and better outcomes for both the business and its employees.ValiantCEO, 17d ago
Leverage predictive analytics and AI for strategic decisions. Take advantage of SAP Business AI and tools like the generative AI Assistant Joule to gain predictive insights and enhance decision-making. With the pressure to lower costs and increase process efficiency being a key driver for nearly a third of survey respondents, these tools can provide the necessary analytics to inform cost-saving and investment strategies.SAPinsider, 29d ago
No one fully understood how smartphones or social media would transform every aspect of our life in the span of fifteen years. AI is a dynamic field, and its impact on education is beyond what any of us could probably comprehend today. The only way we can keep up is by building strong guardrails and regularly assessing and evaluating the extent to which AI tools are enhancing educational outcomes. We must also constantly anticipate and respond to unintended consequences as they emerge. This should include information from academic assessments, surveys, and feedback from teachers and students. The data collected should be used to refine AI implementation strategies and inform policy decisions.The Thomas B. Fordham Institute, 21d ago

Latest

As organizations continue to navigate the complex landscape of digital transformation, securing workload identities is non-negotiable. The implementation of multi-factor authentication, particularly through mechanisms like mTLS, is a proactive step towards mitigating evolving cyber threats. By understanding the risks, overcoming reluctance, and embracing modern security measures, businesses can fortify their defenses, protecting not only their assets but also their reputation in an era where data breaches are not just a possibility but a harsh reality.Security Boulevard, 4d ago
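As a minimal sketch of what enforcing mTLS can look like in practice (certificate paths below are placeholders, and a production deployment would typically rely on a service mesh or SPIFFE-style workload identities rather than a hand-rolled server):

```python
# A minimal mTLS sketch: the server only accepts clients that present a
# certificate signed by a trusted CA. Certificate paths are placeholders.
import http.server
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")
ctx.load_verify_locations(cafile="internal-ca.crt")
ctx.verify_mode = ssl.CERT_REQUIRED  # this line makes the TLS *mutual*

server = http.server.HTTPServer(("0.0.0.0", 8443),
                                http.server.SimpleHTTPRequestHandler)
server.socket = ctx.wrap_socket(server.socket, server_side=True)
server.serve_forever()
```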
Until AI advances to the point where it can actually think for itself, understand, and exhibit something that more closely resembles human-like intelligence and common sense, it will remain a tool – albeit a very impressive and sophisticated tool – which can be used for good or bad, depending on the intentions of its human users or perhaps the unintended consequences of its design. When machines can think for themselves, we can expect them to be more benign than people.RTInsights, 4d ago
There’s also the way we find love and romance. Already, AI dating tools are aping online dating, except the person on the other end isn’t a person at all. There’s already one company that has AI doing the early-stage we-met-on-Tinder thing of sending flirty texts and even sexy selfies, and (for a fee) sexting and segueing into being an online girlfriend / boyfriend. Will most people prefer the warm glow of a phone screen to an actual human? I don’t think so. But enough will, I suspect, to cause a lot of problems. Because while on the one hand it seems fine if under-socialized humans can find some affection with a robot, I question whether directing an under-socialized person’s already-limited social skills to a robot designed to always be amenable and positive and accommodating and compliant really serves that person, or anyone who has to be around that person. Interacting with other human beings can be challenging. That’s part of the point: It’s how we learn to regulate our own emotions and consider those of others; it’s how we start to discern which people are our people; it’s how we learn to compromise and negotiate and build layered and meaningful relationships. You can’t get that from AI. But what you can get is a facsimile of a human being who more or less does what will make you happy in the moment—which, again, is not at all a recipe to be happy in the aggregate.Ms. Magazine, 5d ago

Latest

...“Part of this process has been getting to grips with putting a formal AI strategy in place. Our research found that 39% of organisations now have a formalised plan for using AI – which is undoubtedly influenced by a desire to embrace GenAI tools like ChatGPT. But this number should be higher, and a year on from ChatGPT we should be seeing AI approached more strategically, and with strong governance in place. ChatGPT and GenAI have also revealed to businesses the importance of getting a strong data fabric in place. Our research found that only 20% currently have a data fabric that supports GenAI very well – improving this will be vital to make effective use of the technology.”...TechRound, 5d ago
Artificial intelligence has revolutionized tourist decision-making by shifting the focus from price considerations to personalized alternatives. Tourists can now choose destinations, places, and activities that best suit their preferences, thanks to AI’s implementation of personalization techniques and recommender systems. These systems leverage the vast quantity of information available on the internet, including User-Generated Content (UGC), to provide more tailored and informed experiences. Travel assistants that leverage advancements in artificial intelligence, mobile devices, natural language processing, and speech recognition have become increasingly popular. These applications are designed to cater to user preferences, interests, and availability, offering on-demand or autonomous suggestions that proactively anticipate users’ needs and enhance the travel experience through personalized and intuitive assistance. ServiceNow leverages generative AI to provide relevant, direct and conversational responses, seamlessly connecting interactions to digital workflows across the Now Platform. For example, when users inquire through Now Assist for Virtual Agent, generative AI quickly provides concise answers, supplying information such as internal codes for product and engineering teams, product media, document links, or relevant knowledge base article summaries. This ensures accurate conversations across departments and systems, improving productivity, boosting self-solve rates, and expediting issue resolution within ServiceNow. In today’s technology-driven era, the increasing AI footprint in the hospitality industry is a positive development.DATAQUEST, 5d ago
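For a sense of how the recommender systems mentioned above work at their simplest, here is a toy content-based sketch; the destinations and descriptions are invented for illustration, and production systems would add collaborative filtering, UGC signals, and far richer features.

```python
# A toy content-based recommender: match a tourist's stated preferences
# against destination descriptions (illustrative data only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

destinations = {
    "Kyoto": "temples gardens tea ceremony quiet cultural heritage",
    "Cancun": "beach nightlife resorts snorkeling warm weather",
    "Interlaken": "mountains hiking paragliding lakes alpine scenery",
}

vec = TfidfVectorizer()
matrix = vec.fit_transform(destinations.values())

def recommend(preferences: str):
    # Rank destinations by similarity between the query and each description.
    scores = cosine_similarity(vec.transform([preferences]), matrix)[0]
    return sorted(zip(destinations, scores), key=lambda t: -t[1])

print(recommend("I love hiking and mountain scenery"))
# Interlaken should rank first for this query.
```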
BUOLAMWINI: I'm concerned with the way in which AI systems can kill us slowly already. I'm also concerned with things like lethal, autonomous weapons, as well. So for me, you don't need to have superintelligent AI systems or advanced robotics to have a real harm. A self-driving car that doesn't see you on the road can be fatal and harmful. I think of this notion of structural violence where we think of acute violence - there's the gun, the bullet, the bomb, we see that type of violence. But what's the violence of not having access to adequate health care? What's the violence of not having housing in an environment free of pollution? And so when I think about the ways in which AI systems are used to determine who has access to health care and insurance, who gets a particular organ, you know, in my mind, there are already - and in, also, the world, we see there are already many ways in which the integration of AI systems lead to real and immediate harms. We don't have to have superintelligent beings for that.NPR, 5d ago
Harpreet Gulati, Senior Vice President, and Head of PI System Business at AVEVA commented, “Harnessing the potential of green hydrogen could avoid up to 80 gigatons of cumulative CO₂ emissions by 2050, contributing to as much as 20% of total abatement required to drive the net-zero economy. The hydrogen sector will require a new transportation, distribution, and regulatory approach to operate successfully as an alternative fuel. Combining this with the latest digital twin and AI-enhanced capabilities, industries can discover new paths to drive efficiency and decarbonize.”...Chemical Industry Digest, 5d ago
In Europe, the EU’s AI Act recognizes that AI systems will likely have a high energy consumption during their lifecycle. The legislation categorizes AI systems, setting out requirements for so-called “high-risk AI systems”. They must be designed and developed with logging capabilities that can record energy consumption, the measurement or calculation of resource use, and the environmental impact throughout the system’s lifecycle. At present, there are no regulations in place to reduce the energy consumption of AI technologies in the EU; rather, the European Parliament focuses on transparency and on gaining a better understanding of the energy use of the advanced technology.OilPrice.com, 5d ago
Ultimately, the existing data do not favor either urgent or traditional start. The significant variability in the size of pediatric patients, and therefore in the flow dynamics, limits the ability to make an evidence-based recommendation. The panel judged that long-term outcomes such as hernia occurrence are not as critical in patients who need to initiate dialysis urgently. Patients and families will likely weigh short-term outcomes, particularly the rate of catheter dysfunction in the first 3 months, more heavily in decision-making. Although a higher leakage rate is seen with urgent start PD, there were lower rates of peritonitis and exit-site infection. This may be due to exit-site care protocols and other measures taken to maintain sterility, leading to the possibility of dialysate leakage without peritonitis or infection [57]. Other interventions such as application of fibrin glue, utilization of purse string sutures, or creation of longer tunnels may help mitigate dialysate leakage.SAGES, 5d ago

Latest

...“The 2024 Threat Predictions Report from FortiGuard underscores the imminent escalation of advanced cyber threats driven by the proliferation of Cybercrime-as-a-Service and the impact of generative AI. With threat actors now equipped with advanced tools, employing stealthier techniques and diversifying their targets, a unified response from the cybersecurity community is imperative. Considering these revelations, the guidance is clear: organizations should actively cultivate a culture of cyber resilience and bridge the skills gap to strengthen their defenses against the rising sophistication of cyber adversaries. The report serves as a roadmap for navigating these evolving threats and provides actionable insights to empower organizations in securing the digital landscape.”...CRN - India, 5d ago
Foundation models — of which LLMs are just one type — are at the core of generative AI (see Figure 2). When fine-tuned — in other words, trained on additional data related to a given topic — these pre-trained general-purpose models can serve specific applications. For example, FinBERT is a BERT model pre-trained on financial texts, such as financial statements, earnings call transcripts, and analyst reports, to improve the accuracy of its finance-related analysis and output.spglobal.com, 5d ago
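As a rough illustration of the fine-tuning step described above (a sketch only: the `bert-base-uncased` base checkpoint, the toy sentences, and the labels are stand-ins, not FinBERT's actual training setup):

```python
# A minimal fine-tuning sketch in the spirit of FinBERT: start from a
# pre-trained BERT and continue training on domain-specific labeled text.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)  # e.g. negative / neutral / positive

texts = ["Revenue grew 12% year over year.",
         "The company warned of weakening demand."]
labels = torch.tensor([2, 0])  # toy sentiment labels

batch = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few gradient steps stand in for real epochs
    out = model(**batch, labels=labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
print(f"final loss: {out.loss.item():.3f}")
```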
...“There are also a number of manufacturers that have never really deployed vision looking at AI and digitization as a way to solve an error-prone process and gather data to guide an automation strategy,” adds Goffin.assemblymag.com, 5d ago
In the 1990s and 2000s, with the advent of “big data” and thanks to advancements in computational power, significant breakthroughs occurred in machine learning and AI, especially in neural networks. New processes of machine learning, known as “deep learning,” (because the architecture was deeper and more complex, containing several hidden layers between the input and the output) would enable data processing on a deeper, more accurate, and more flexible level. Benchmarks that had been frozen for decades improved dramatically across almost all the classic applications, such as machine translation in natural language processes and image classification in computer vision.spglobal.com, 5d ago
Their multifaceted approaches encompass early detection capabilities, countering social media manipulation through advanced machine learning algorithms, and stringent cybersecurity measures. These defenders perform a crucial function in identifying and thwarting potential threats in modern campaigns and contribute significantly to minimizing the impact of false narratives on public sentiment. Moreover, it is essential to couple the AI-based detection systems with initiatives to raise public awareness and establish robust legal frameworks against challenges like deepfakes.unite.ai, 5d ago

Latest

Within the realm of collective behaviour, at least two distinct approaches have been used to model the mapping between sensory inputs and behavioural outputs. Classical models in collective behaviour posit a set of simplified behavioural rules (e.g. attraction, alignment, and repulsion) and seek to use those rules to explain behavioural patterns (Couzin et al., 2002). Bringing sensory ecology into this approach could inform how these hypothesised rule-sets should be constructed by highlighting which pieces of sensory information are likely to be perceived and relevant to any given organism (Kranstauber et al., 2020; Strandburg-Peshkin et al., 2013; Witkowski and Ikegami, 2016). A complementary approach, increasingly used in recent years, is that of machine learning to establish the input–output relationship between high-dimensional sensory inputs and lower-dimensional behavioural outputs, that is, dimensionality reduction (Graving et al., 2019; Graving and Couzin, 2020; Valletta et al., 2017). Within this approach, an understanding of the organism’s sensory capacities would provide a more realistic set of features reflecting the actual perception of the organism, from which the input–output relationship can then be more effectively learned (see Tuia et al., 2022). An additional possible use of this information would be to exclude it from the machine learning algorithm and ask whether the learned relationship uses realistic features, thus shedding light on whether these methods make biological sense.eLife, 5d ago
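A toy implementation of the classical zonal rules (in the spirit of Couzin et al., 2002) makes the first approach concrete; the zone radii and agent count below are arbitrary choices for illustration.

```python
# A toy sketch of attraction / alignment / repulsion zone rules.
import numpy as np

N = 50
pos = np.random.rand(N, 2) * 10   # agent positions
vel = np.random.randn(N, 2)       # agent headings

R_REPEL, R_ALIGN, R_ATTRACT = 0.5, 2.0, 5.0  # illustrative zone radii

def step(pos, vel, dt=0.1):
    new_vel = vel.copy()
    for i in range(N):
        d = np.linalg.norm(pos - pos[i], axis=1)
        repel = d < R_REPEL
        align = (d >= R_REPEL) & (d < R_ALIGN)
        attract = (d >= R_ALIGN) & (d < R_ATTRACT)
        repel[i] = False                     # ignore self
        if repel.any():                      # repulsion overrides all else
            new_vel[i] = (pos[i] - pos[repel]).sum(axis=0)
        else:
            if align.any():                  # match neighbours' headings
                new_vel[i] += vel[align].mean(axis=0)
            if attract.any():                # move toward neighbours
                new_vel[i] += (pos[attract] - pos[i]).mean(axis=0)
        new_vel[i] /= np.linalg.norm(new_vel[i]) + 1e-9
    return pos + new_vel * dt, new_vel

pos, vel = step(pos, vel)
```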
While applications that are more explicitly malicious should be blocked, when it comes to access control, oftentimes the responsibility of the use of applications like ChatGPT should be given to the users, tolerating and not necessarily stopping activities that may make sense to a subset of business groups or to the majority of them. At the same time security teams have the responsibility to make employees aware of applications and activities that are deemed risky. This can mainly be accomplished through real-time alerts and automated coaching workflows, involving the user in the access decisions after acknowledging the risk. Netskope provides flexible security options to control access to generative AI-based SaaS applications like ChatGPT and to automatically protect sensitive data.gbiimpact.com, 5d ago
Cybersecurity has traditionally been a “go-to” use case for one of the fundamental technologies behind AI — machine learning. Machine learning generally uses supervised or unsupervised computational algorithms to solve problems in one of four distinct groups: classification, clustering, dimensionality reduction, and prediction/regression problems. In cybersecurity, all four problem types are addressed in processes such as identifying normal versus irregular patterns for data and application access controls, isolating potential threat actors from normal users based on automated behavior analysis, refining criteria used in monitoring normal business operations versus irregularities that may indicate threat actors and exploitations, and recognizing new or evolving methods of attack such as malware variants. For decades, machine learning has been part of the toolkit used in cyber threat management, but recent advancements in technology — particularly the transformer, which puts the “T” in ChatGPT — are poised to transform the cybersecurity industry.spglobal.com, 5d ago
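As one minimal example of the problem types listed above, an unsupervised anomaly detector can flag irregular access patterns; the session features and values below are synthetic stand-ins.

```python
# A minimal anomaly-detection sketch: learn what "normal" sessions look
# like, then flag sessions that deviate sharply from that baseline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Features per session: [requests_per_min, failed_logins, bytes_out_mb]
normal = rng.normal(loc=[30, 0.2, 5], scale=[5, 0.5, 2], size=(500, 3))
suspicious = np.array([[400, 12, 250],   # login brute-force + exfil-like volume
                       [250, 8, 120]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))  # -1 flags an outlier, 1 means inlier
```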

Top

Addressing Explainability Challenges: The task of explaining the decision-making processes of LLMs presents notable challenges. Traditional methods of explainability often fall short in addressing the intricate nature of these models. However, emerging strategies and continued research are showing promise in enhancing the explainability and reliability of LLMs, paving the way for more intuitive and transparent AI systems.Fiddler AI, 13d ago
AI has exciting potential to revolutionise the world and bring vast benefits to sectors such as healthcare, including disease diagnosis and new (de novo) drug development with greater efficiency. However, there are also challenges with data privacy, as AI models rely upon large sets of data to train and develop models for commercial usage, and issues such as imbalance in datasets can result in bias that may in turn lead to unfair and unreasonable outcomes.bbntimes.com, 6d ago
Beyond the realm of privacy and security, the integration of AI in decision-making processes also introduces ethical considerations. AI-induced biases and fairness concerns have become central to discussions surrounding AI adoption. Organizations must grapple with the ethical implications of their AI-driven choices, emphasizing fairness, transparency, and accountability. When AI algorithms inadvertently perpetuate biases present in their training data, trust erosion becomes a genuine risk, potentially damaging an organization’s reputation and relationships with customers and stakeholders.DATAVERSITY, 12d ago
Fears of a “Terminator scenario” were not shared by all conference participants, with an unsurprising amount of overlap with those that are presently developing large “frontier” AI systems. Meta’s Nick Clegg, president of global affairs for the company, urged participants to focus on immediate and everyday AI safety risks like bias in decision-making systems. Another luminary that was present at the conference, Turing Award-winning researcher Yoshua Bengio, was asked to head up a body tasked with producing a report on the risks and possibilities of frontier AI systems.CPO Magazine, 26d ago
The first are technological solutions that can be applied directly to their artworks – e.g. by using tools that change the pixels of an image in a way the human eye cannot detect, such that it will “cloak” the image to prevent AI models from copying its style,[24] or “trick” the AI model into thinking the image is something other than what it actually is (e.g. recognising an image of a car as a cow).[25] These may be more effective than technological solutions that are detached from the artwork (e.g. HTML tags,[26] or simply Terms of Use pages on the sites where the content is hosted). However, the ethics of using such data poisoning tools is being debated, as the use cases vary from deterring companies from using copyrighted works without permission (“use at own peril!!”) to “actively trying to ruin a model”.[27]
[24] https://glaze.cs.uchicago.edu/faq.html
[25] https://www.technologyreview.com/2023/10/24/1082247/this-new-tool-could-give-artists-an-edge-over-ai/
[26] DeviantArt created a new form of protection, with “noai” (AI cannot use anything on the page) and “noimageai” (AI cannot use any images on the page) directives. These meta tags are placed in the HTML page associated with the art; web crawlers can read the tags and recognise that the person does not want their content used to train AI, though a crawler can still choose to ignore them. See https://techcrunch.com/2022/11/11/deviantart-provides-a-way-for-artists-to-opt-out-of-ai-art-generators/ and https://www.aimeecozza.com/noai-noimageai-meta-tag-how-to-install/
[27] See the interview with Braden Hancock, co-founder of Snorkel AI, available at https://www.computerworld.com/article/3709609/data-poisoning-anti-ai-theft-tools-emerge-but-are-they-ethical.html. Braden is of the view that “there are unethical uses of (technological defences) – for example, if you’re trying to poison self-driving car data that helps them recognize stop signs and speed limit signs (…) if your goal is more towards ‘don’t scrape me’ and not actively trying to ruin a model, I think that’s where the line is for me.”
The Singapore Law Gazette, 10d ago
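For readers curious what honouring those “noai”/“noimageai” directives might look like mechanically, here is a small sketch of a well-behaved crawler check; the parsing approach is an illustrative assumption, not DeviantArt’s or any crawler’s actual implementation.

```python
# A small sketch of how a well-behaved crawler might honour the
# DeviantArt-style "noai"/"noimageai" robots directives described above.
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.directives: set[str] = set()

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.directives |= {d.strip().lower()
                                for d in a.get("content", "").split(",")}

page = '<html><head><meta name="robots" content="noai, noimageai"></head></html>'
parser = RobotsMetaParser()
parser.feed(page)

if "noai" in parser.directives:
    print("Skip: owner opted out of AI training entirely.")
elif "noimageai" in parser.directives:
    print("Skip images: owner opted out of image training.")
```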
Third, Congress should continue to invest in research for privacy- and security-enhancing technologies, as these will have important uses for AI. Additional research on topics such as secure multiparty computation, homomorphic encryption, differential privacy, federated learning, zero-trust architecture, and synthetic data can minimize or eliminate the need for AI-enabled services to process personal data while still maintaining the benefits of those services.[18] Many developers are already exploring solutions to address privacy concerns associated with large language models (LLMs). For example, some developers are exploring the use of “data privacy vaults” to isolate and protect sensitive data.[19] In this scenario, any PII would be replaced with deidentified data so that the LLM would not have access to any sensitive data, preventing data leaks during training and inference and ensuring only authorized users could access the PII.itif.org, 26d ago
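A minimal sketch of the data privacy vault pattern described above might look like the following; the regex patterns, token format, and in-memory vault are illustrative stand-ins for a hardened production service.

```python
# The "data privacy vault" pattern: PII is swapped for opaque tokens
# before text ever reaches the LLM, and only authorized callers can
# re-identify it afterwards.
import re
import uuid

VAULT: dict[str, str] = {}  # token -> original PII (stand-in for a real vault)

PII_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),   # email addresses
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),         # US SSN-shaped numbers
]

def deidentify(text: str) -> str:
    for pattern in PII_PATTERNS:
        for match in set(pattern.findall(text)):
            token = f"<PII:{uuid.uuid4().hex[:8]}>"
            VAULT[token] = match
            text = text.replace(match, token)
    return text

def reidentify(text: str, authorized: bool) -> str:
    if not authorized:
        return text  # unauthorized users only ever see tokens
    for token, value in VAULT.items():
        text = text.replace(token, value)
    return text

safe = deidentify("Contact jane.doe@example.com, SSN 123-45-6789.")
print(safe)                    # tokens only; safe to send to the LLM
print(reidentify(safe, True))  # original restored for authorized users
```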

Latest

While this study focuses only on neuromodulation techniques, we want to highlight that the proposed approach can be applied to other forms of treatment (e.g., pharmacological studies, cognitive training) tested as part of standard experiments or RCTs. It is worth noting that the contribution of subjective beliefs to experimental results might be even more enhanced when considering interventions carried out in seemly cutting-edge research settings, such as experiments involving virtual reality, neurofeedback paradigms, and other types of brain-computer interfaces. In such cases, participants might be more susceptible to forming specific expectations about treatment effects (Burke et al., 2019; Thibault et al., 2017). Therefore, the explanatory power of subjective beliefs could be intensified compared to more traditional forms of treatment, such as pharmacology.elifesciences.org, 5d ago
Another option might be that the panoply of existing public regulators of equality, finance, telecoms and competition might have their remits expanded. The Equality and Human Rights Commission could be resourced on an ongoing basis to combat racial and other biases in AI. The current UK system of the Information Commissioner and Information Tribunal needs to be adapted to meet the specific challenges of AI. Police and other agencies will also need to be empowered and resourced to investigate criminality.Local Government Lawyer, 5d ago
Once the AI system has mapped the input text into tokens, it encodes the tokens into numbers and converts the sequences it processes (even up to multiple paragraphs) into vectors of numbers that we call “word embeddings.” These are vector-space representations of the tokens that preserve their original natural language representation that was given as text. It is important to understand the role of word embeddings when it comes to copyright because the embeddings are the representations (or encodings) of entire sentences, paragraphs, and even documents, in a high-dimensional vector space. It is through the embeddings that the AI system captures and stores the meaning and the relationships of the words from the natural language.The Scholarly Kitchen, 6d ago
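As a small illustration of tokens becoming vectors (a sketch assuming the open-source sentence-transformers library and its `all-MiniLM-L6-v2` checkpoint, neither of which is mentioned in the article):

```python
# Embedding text into a high-dimensional vector space, where proximity
# between vectors reflects similarity in meaning.
from numpy import dot
from numpy.linalg import norm
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
sentences = ["The cat sat on the mat.",
             "A feline rested on the rug.",
             "Quarterly revenue beat expectations."]

emb = model.encode(sentences)   # one vector per sentence
print(emb.shape)                # (3, 384) for this model

# The two cat sentences should sit much closer together than either
# does to the finance sentence.
cos = lambda a, b: dot(a, b) / (norm(a) * norm(b))
print(cos(emb[0], emb[1]), cos(emb[0], emb[2]))
```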
The guidelines apply to all types of AI systems, not just frontier models. They also provide suggestions and mitigations that will help data scientists, developers, managers, decision-makers, and risk owners make informed decisions about the secure design, model development, system development, deployment, and operation of their machine learning AI systems.Industrial Cyber, 6d ago
When we look at identifying potential botnet DDoS attacks, that is often the easy part. The more difficult challenge is how to address the attack in a granular manner without creating a traffic bottleneck or black holes. How do you distinguish traffic originating from hundreds, potentially thousands, of compromised IoT devices from valid traffic? How do you limit or even stop this compromised traffic without impacting the service experience of valid users? While there may be a drive to introduce more intelligence on the IoT devices themselves and into their supply chain, how does the IP network defend itself? It requires an ability to rapidly set up and tear down hundreds of thousands of IP filters in the network – all without impacting the performance of the IP network.Cyber Defense Magazine, 6d ago
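To make the granularity point concrete, here is a toy per-source token-bucket filter; real networks push such filters into routers and firewalls (for example via BGP FlowSpec) precisely because doing this at line rate for hundreds of thousands of sources is the hard part. The rate and burst values are illustrative.

```python
# A toy per-source rate limiter: each source IP gets its own budget, so
# compromised devices are throttled without black-holing valid traffic.
import time
from collections import defaultdict

RATE = 100    # allowed packets per second per source (illustrative)
BURST = 200   # bucket capacity (illustrative)

buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

def allow(src_ip: str) -> bool:
    b = buckets[src_ip]
    now = time.monotonic()
    # Refill the bucket in proportion to elapsed time, capped at BURST.
    b["tokens"] = min(BURST, b["tokens"] + (now - b["last"]) * RATE)
    b["last"] = now
    if b["tokens"] >= 1:
        b["tokens"] -= 1
        return True
    return False  # drop: this source exceeded its per-IP budget

print(allow("203.0.113.7"))
```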

Top

Generative AI promises to eliminate the drudgery of many basic tasks so that teams can focus on driving results with an all-knowing, super intelligent AI co-pilot at their side. It’s already transforming the way many departments within companies work, from the way developers write new code to how marketing drafts press releases. Tools like Gong Engage and People.ai have been billed as generative AI-powered sales platforms designed to capture all interactions between sales teams and customers to provide insights grounded in context, fueling better business decisions. No more manually searching call logs or customer email history. The good news for businesses? Today’s workforce has an appetite for this kind of modern technology that’ll transform business. In fact, according to recent surveys, 70% of Gen Z employees say they would leave their current job if it meant they’d have access to better technology, like AI. These generations feel passionately about tools that can enhance their job, not hinder it. In response, 91% of workplace decision makers agree that they will need to provide more advanced digital experiences to meet the demands of younger generations. Failing to meet the technology needs of roughly half of today’s workforce would mean losing top talent — a scenario many industries cannot afford in today’s tightened economic environment.Training Industry, 17d ago
At its core, AI is about data. And the largest immediate issue here is data fragmentation. UK companies identified challenges in terms of data readiness, as 86% of respondents admitted that their data exists in silos across their business. This presents considerable risks for data and AI management and limits the ability to fully leverage AI technologies. To maximise the benefits of AI, it’s essential to be able to integrate data from many sources, seamlessly and securely.techuk.org, 7d ago
However, the WRC is not just about challenges. It also presents a significant opportunity for the UK to shape the global agenda for wireless communication and AI systems. Active engagement in the discussions at WRC allows the UK to influence international decisions that can benefit its thriving technology sector. By advocating for policies that facilitate AI innovation and ensure fair access to wireless spectrum, the UK can foster an environment conducive to technological advancement.techuk.org, 11d ago