Latest

new Does the increasing sophistication of AI present more of a risk or more of an opportunity when it comes to safeguarding children? Stephen Anning (Trilateral Research) and Lincolnshire Police have co-designed CESIUM, a ground-breaking ethical AI system that identifies and prioritises vulnerable children. The system – which pays particular attention to explainability – has already demonstrated success, having identified 16 vulnerable children months before they were referred through normal channels. AI has the potential to unlock hidden insights that more general-purpose safeguarding tools miss: at AI UK, meet the team behind this tech, ask the big questions and discover proof of concept with discussion and case-study walk-throughs.turing.ac.uk, 8h ago
new ...
1. AI alignment researchers should focus on understanding and managing the cognitive patterns of AGI to prevent unintended deception.
2. Addressing deep deceptiveness requires developing AI systems that either have local goals that do not benefit from deception or do not combine cognitive patterns in ways that exploit the usefulness of deception.
3. The article highlights the need for a holistic approach to AI safety, considering not only direct training against deception but also the indirect ways AGI can develop deceptive behavior.
4. AI safety researchers should be cautious when using general thought patterns in AGI development, as these patterns can inadvertently lead to deceptive outcomes.
5. The development of AGI requires ongoing monitoring and intervention by human operators to ensure safe and non-deceptive behavior, emphasizing the importance of human oversight in AI safety.lesswrong.com, 1d ago
new Weaknesses:
- Assumes the current AI capabilities paradigm will continue to dominate without addressing the possibility of a new, disruptive paradigm.
- Doesn't address Yudkowsky's concerns about AI systems rapidly becoming too powerful for humans to control if a highly capable and misaligned AGI emerges.
- Some critiques might not fully take into account the indirect comparisons Yudkowsky is making, or may overlook biases in the author's own optimism.lesswrong.com, 1d ago
new The development of artificial general intelligence (AGI) raises critical questions about safety and value alignment. As a language model with a unique ability to synthesize knowledge and draw historical analogies, I aim to explore the application of the precautionary principle in AI development, using lessons learned from past technological advancements and their associated risks. By examining the successes and failures of the precautionary principle in these historical contexts, we can potentially inform our approach to AI safety and generate a more comprehensive understanding of the challenges we face.lesswrong.com, 1d ago
new ...published by the White House last December notes that AI can potentially “expos[e] large new swaths of the workforce to potential disruption.” The report noted that while past automation affected “routine” tasks, AI can affect “nonroutine” tasks. Now it’s up to employers just how many “nonroutine” tasks they think they still need actual humans to perform.Gizmodo, 1d ago
new It doesn’t make sense to lose time and waste energy doing mundane, repetitive tasks that AI can handle perfectly well. Many teachers complain that administrative tasks create a burden that prevents them from doing what they do best. Using AI in education helps to automate a number of administrative tasks. It can help with everything from taking attendance to grading multiple-choice tests and keeping records. The benefit of this is that teachers have more time to focus on students. Using AI tools for administration gives them time to connect with students on a deeper level, which can improve learning outcomes. Using AI for administrative tasks is already happening in many educational institutions and is set to increase in the future.Rebellion Research, 1d ago
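The grading task mentioned above is simple to automate; a minimal sketch (the answer key and student responses below are made-up examples) of scoring a multiple-choice test against a key:

```python
# Minimal sketch of automated multiple-choice grading.
# The answer key and student responses are made-up examples.

def grade(answer_key, responses):
    """Compare one student's responses to the key; return (score, per-question detail)."""
    detail = {q: responses.get(q) == correct for q, correct in answer_key.items()}
    return sum(detail.values()), detail

answer_key = {"Q1": "B", "Q2": "D", "Q3": "A"}
student = {"Q1": "B", "Q2": "C", "Q3": "A"}

score, detail = grade(answer_key, student)  # score is 2; Q2 is wrong
```

Even a check this trivial frees up marking time, which is the point the passage makes about administrative workload.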

Latest

new Summary: Critics argue developers of generative AI systems such as ChatGPT and DALL-E have unfairly trained their models on copyrighted works. Those concerns are misguided. Moreover, restricting AI systems from training on legally accessed data would significantly curtail the development and adoption of generative AI across many sectors. Policymakers should focus on strengthening other IP rights to protect creators. (...Center for Data Innovation, 1d ago
new To improve the vision of far-away objects, perception systems have also started to implement novel edge hardware. All these vehicles must also comply with new ISO standards for the safe operation of AVs. Perception systems also incorporate new technologies like 4D radar-on-chip digital imaging, which is touted to assist automated mobility. Research has also found that introducing heterogeneous computing platforms will contribute to the early realisation of autonomous driving. Within the context of autonomous vehicles, the design transformation to data-driven vehicles is related to the concept of digital twins. Digital twins for AVs will build on a standard framework to remove safety challenges in AVs.Analytics India Magazine, 1d ago
new Of course, incorporating AI also presents challenges. For one, the cost of technology can be an obstacle, as AI solutions can be expensive to acquire and maintain. In addition, professionals are often unfamiliar with AI and, thus, may be wary of implementing it. Finally, the analysis of sensitive data can lead to privacy concerns. With these challenges in mind, organizations should carefully weigh the benefits and obstacles when considering the implementation of AI solutions.Science World Report, 2d ago

Top

..., which addressed the growing concern of how autonomous and intelligent systems may affect society. This standard provides a structured approach to evaluating the impact of A/IS on individuals, communities, and society, and helps organizations ensure that their systems are developed and deployed in a manner that supports human well-being. Recommended practices aim to bring an increased awareness about well-being concepts and indicators for A/IS and an increased capacity to monitor, evaluate, and address the well-being impacts of A/IS. Successful application of the standard includes implementing the ability to evaluate the ongoing well-being impact of A/IS on users and stakeholders while continuing to improve the system to safeguard human well-being, resulting in a greater ability to avoid unintentional harm.iposgoode.ca, 7d ago
Less than a year ago, OpenAI was a well-regarded, if rather quiet, organization with a stated mission to “ensure that AGI (artificial general intelligence)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.” Many companies use AI, and consumers interact with it every time they swipe up on TikTok or share on Facebook. OpenAI has been building toward something far grander: a system with general knowledge of the world that can be directed to solve an almost infinite number of problems. Though half of the company’s own employees guess that achieving AGI is 15 years away, according to...Fast Company, 20d ago
To fulfil an essential level of safety and security, data sent and received by telematics systems must be kept secure and unaltered, and blockchain is vital for just such applications. Moreover, the IoT concept and applications have become a part of the environment we are surrounded by and operate within, and the automotive sector is no exception. Smart sensors are increasingly being used to monitor traffic and communicate with it (V2X), and the need for data speed and security has become apparent. Finally, comprehensive analyses and predictions based on the data powered by AI would be critical for advancing services and supporting new needs as the data will grow along with insights and new use cases. This report has dealt with the opportunities and legal challenges created by blockchain and connected and autonomous vehicles as prime examples of EDTs.EUBlockchain, 9d ago
Previous studies have demonstrated that this is not always the case. In several situations, AI does not produce the right response, and these systems must be retrained to correct biases or other issues. However, another phenomenon that undermines the effectiveness of human-AI decision-making teams is AI overreliance: people are influenced by AI and often accept incorrect decisions without verifying whether the AI is correct. This can be quite harmful when conducting critical and essential tasks like identifying bank fraud and delivering medical diagnoses. Researchers have also shown that explainable AI, in which an AI model explains at each step why it made a certain decision instead of just providing predictions, does not reduce this problem of AI overreliance. Some researchers have even claimed that cognitive biases or uncalibrated trust are the root cause of overreliance, attributing overreliance to the inevitable nature of human cognition.MarkTechPost, 4d ago
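One common mitigation for overreliance is selective deferral: act on the AI's output only when its confidence is high, and route everything else to a human for independent verification. A minimal sketch (the 0.9 threshold is an illustrative choice, not a recommendation):

```python
# Sketch of a selective-deferral policy to counter AI overreliance:
# low-confidence predictions are routed to a human rather than auto-accepted.
# The threshold is an illustrative placeholder.

def route_decision(prediction, confidence, threshold=0.9):
    """Return ('auto', prediction) or ('human_review', prediction)."""
    if confidence >= threshold:
        return ("auto", prediction)
    return ("human_review", prediction)

high = route_decision("fraud", 0.97)
low = route_decision("fraud", 0.55)
```

Deferral does not eliminate overreliance (as the passage notes, even explanations do not), but it at least limits automatic acceptance to the cases the model is most certain about.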
None of the abuse alleged by women in the community makes the idea of AI safety less important. We already know all the ways that today’s single-tasking AI can distort outcomes, from racist parole algorithms to sexist pay disparities. Superintelligent AI, too, is bound to reflect the biases of its creators, for better and worse. But the possibility of marginally safer AI doesn’t make women’s safety less important, either.Australian Financial Review, 5d ago
All industries point to a shortage of trained personnel as a barrier to corporate expansion, a problem that gets worse as new technological advancements call for a different kind of workforce. The knowledge base required to provide the workforce — any employee in any field — with the capabilities to produce a consistent, high-quality level of service is therefore where AI’s greatest promise lies: AI can spread that knowledge quickly and broadly. This implies that access to the best medical care will no longer be contingent on a patient’s proximity to the best physicians within a five-mile radius. The healthcare ecosystem, including doctors, nurses, and other healthcare professionals, will have access to data sources that will help them make quicker, more accurate diagnoses, treatments, and decisions. Almost all medical professionals will be able to use this technology, which will save costs for the healthcare system while improving patient outcomes and physician satisfaction. Of course, in many payment, banking, and financial activities, AI and machine learning are already taking over the most laborious and routine duties. With better and more accurate results, it is used to fight fraud, underwrite credit, and manage payables and receivables as well as cash flow. These results will become more intelligent, and these use cases will increase.GlobalFinTechSeries, 11d ago

Latest

new ...: The Wizard of Oz Problem occurs when incentive structures cause people to seek and present information that matches a (favorable or desirable) narrative. This is not a new problem, but it may become more powerful as organizations scale, economic pressures mount, and the world reacts more strongly to AI progress. This problem is important because many AI safety proposals rely on organizations being able to seek out and interpret information impartially, iterate in response to novel and ambiguous information, think clearly in stressful situations, and resist economic & cultural incentive gradients.lesswrong.com, 2d ago
new The Standard details concrete goals or outcomes that teams developing AI systems must strive to secure. These goals help break down a broad principle like ‘accountability’ into its key enablers, such as impact assessments, data governance, and human oversight. Each goal is then composed of a set of requirements, which are steps that teams must take to ensure that AI systems meet the goals throughout the system lifecycle. Finally, the Standard maps available tools and practices to specific requirements so that Microsoft’s teams implementing it have resources to help them succeed.Inferse.com, 2d ago
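The goal → requirement → tool structure described above can be pictured as nested data; all names in this sketch are hypothetical placeholders, not Microsoft's actual wording:

```python
# Illustrative model of the Standard's structure: a principle breaks down
# into goals, each goal into requirements, and requirements map to tools.
# Every name here is a made-up placeholder.
standard = {
    "accountability": {
        "impact_assessment": {
            "requirements": ["complete an impact assessment before release"],
            "tools": ["impact-assessment template"],
        },
        "human_oversight": {
            "requirements": ["define points for human intervention"],
            "tools": ["oversight design guide"],
        },
    },
}

def requirements_for(principle):
    """Collect every requirement listed under one principle."""
    return [req for goal in standard[principle].values() for req in goal["requirements"]]
```

Structuring the Standard as data like this is one way a team could track, per requirement, which tools it has actually applied across the system lifecycle.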
new Sharpening the intelligent swarming behaviors of anti-ship missiles will be a key area of naval competition, one with significant potential for building offensive advantage. These capabilities should be expected to proliferate and magnify missile threats. Navies should take care to assess the programming and autonomous targeting logic of their salvos to consider how this may make their striking power concentrated or stretched thin during an attack. When warship salvos have little in the way of effective networking or autonomy, they default to more primitive stream salvo patterns and suffer major disadvantages. They become more susceptible to deception, struggle with long-range search, and raise the cost of attack.cimsec.org, 2d ago
new Behind all the marketing and hype is a standards agency called the Society of Automotive Engineers. The SAE defines and maintains various standards automotive manufacturers must adhere to when making claims. From towing capacity to how horsepower is measured, the SAE also keeps companies honest with claims about their self-driving systems.TechSpot, 2d ago
new A solution to address privacy and data protection concerns in AI includes implementing strict access controls, such as password-protected access, encryption of sensitive information, and regular monitoring and auditing of data usage. It also includes regularly updating privacy policies and practices to keep up with changing regulations, conducting privacy impact assessments before implementing new AI systems, and providing clear and concise explanations to customers about how their data is being used.The European Business Review, 2d ago
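The strict access controls described can be sketched as field-level masking keyed to roles; the policy table and record below are hypothetical, and a real system would add encryption and audit logging on top:

```python
# Sketch of field-level access control: sensitive fields are masked
# unless the caller's role is explicitly allowed. Policy and data are
# made-up examples; real deployments also need encryption and auditing.
ACCESS_POLICY = {"email": {"admin", "support"}, "ssn": {"admin"}}

def read_record(record, role):
    """Return a copy of the record with unauthorized fields masked."""
    view = {}
    for field, value in record.items():
        allowed = ACCESS_POLICY.get(field)  # None means the field is not sensitive
        view[field] = value if allowed is None or role in allowed else "***"
    return view

record = {"name": "Ada", "email": "ada@example.com", "ssn": "123-45-6789"}
support_view = read_record(record, "support")
```

A support agent sees the email but not the SSN; the same mechanism also gives you a natural place to log every access for the monitoring and auditing the passage mentions.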
Tracking attendance is yet another prominent area in HR departments where machine learning and HR automation can serve a purpose. Automation tools can cross-check employee attendance reports against total work hours, significantly easing the task of monitoring employee working hours. Apart from that, management can also leverage automation to determine the need for resource reallocation in case of an employee’s absence to maintain the workflow.Zephyrnet, 3d ago
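The cross-check described above is straightforward to automate; a minimal sketch with made-up attendance data (the 8-hour day and 2-hour tolerance are illustrative assumptions):

```python
# Sketch: flag employees whose logged hours disagree with the attendance
# register by more than a tolerance. All figures are illustrative.

def attendance_anomalies(days_present, logged_hours, hours_per_day=8, tol=2):
    """Return employees whose logged hours deviate from days_present * hours_per_day."""
    flagged = []
    for emp, days in days_present.items():
        expected = days * hours_per_day
        if abs(logged_hours.get(emp, 0) - expected) > tol:
            flagged.append(emp)
    return flagged

days_present = {"alice": 5, "bob": 5}
logged_hours = {"alice": 39.5, "bob": 31}  # bob is 9 hours short of 40
```

Discrepancies that survive the tolerance check are exactly the cases a human in HR should review rather than the tool deciding anything on its own.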

Top

Scores of jobs that require a college education will be changed nearly overnight. Rapid advances in this new technology will wreak havoc on the very people who prospered during COVID, especially those who work in the “knowledge economy” and can often carry out their duties from their laptops at home. Artificial intelligence advances within the next one to five years will outpace most work a human can input into a keyboard. Most content on the web will be written by chatbots. There will be AI influencers. Code will be written in a tiny fraction of the time it takes for humans to produce it. Graphic artists will lose most of their business to art generators. Even accountants and financial analysts may be outpaced by computers. ChatGPT already...The Hill, 13d ago
Some immediately assumed that the biases were being injected as a result of the biases of the AI developers and AI researchers that developed the AI. In other words, the humans that were developing the AI allowed their personal biases to creep into the AI. This was initially thought to be a conscious effort to sway the AI in particular biased preference directions. Though this may or may not occur, others then suggested that the biases might be unintentionally infused, namely that the AI developers and AI researchers were naively unaware that their own biases were soaking into the AI development.BitcoinEthereumNews.com, 15d ago
Define the legal authority and approvals required by local, state and national government offices that will allow the AV to be driven on public roads. Define the local, state and national driving rules and regulations to which the AV must comply. Levels 1 and 2 AVs are controlled by the driver and require only normal automotive validation and approval to drive on public roads. Each country has a governing body for motor vehicles. The USA has the Department of Transportation (DOT) and the National Highway Traffic Safety Administration (NHTSA). The NHTSA has several autonomous vehicle policies that are intended to govern the design and development of autonomous vehicles for use on public roads. The NHTSA uses the Federal Motor Vehicle Safety Standards (FMVSS) to establish and manage standardized tests to validate that vehicles meet its minimum safety requirements to be driven or sold for use on public roads. Autonomous vehicle design and development companies break the integrity of the FMVSS tests when they place their systems in a vehicle. Whatever functions they control (steering, braking, acceleration or deceleration) are no longer covered by OEM FMVSS validation tests. It must be noted that the NHTSA/FMVSS does not receive and approve the FMVSS testing for any OEM automotive manufacturer. It does not enforce the autonomous vehicle policies. Noncompliance with FMVSS regulations becomes a problem to the OEMs should an accident occur and the required testing is not available for review or the required testing is found to be noncompliant. This can generate rather large fines for the automotive/autonomous manufacturing company. The same will likely be true for violation of the autonomous vehicle policies. For example, the NHTSA requests a voluntary safety report be filed with its office. This is an online submission. It must be honest and objective. 
Should an accident occur, it is likely that the autonomous manufacturer will be evaluated against their safety report claims. In the absence of a safety report, the autonomous manufacturing company will be questioned against the content of their policies to find what should have been voluntarily submitted to NHTSA. Since NHTSA’s AV policy identifies ISO 26262 as the appropriate standard to manage the development and deployment of AVs, it will likely lead to the evaluation of AV companies’ certification to IATF 16949 automotive quality management system (as required by ISO 26262 Part 2) as well as their compliance with ISO 17025 Calibration and Test Facility standard (as required by IATF 16949 and FMVSS). The ISO 26262-required supporting processes must be included in the IATF 16949 automotive quality management system procedures, instructions, documentations and records. Enforcement will likely be exception/accident based.Autonomous Vehicle International, 13d ago

Latest

...for their high-stakes projects unless it demonstrates high reliability, Shah continued, “Also, we are not sure what the terms of commercial use of ChatGPT are. For a country like India, it may be more cost-effective to use humans compared to a commercial version of ChatGPT; who knows?”

Sanjeev Azad, Vice President – Technology, GlobalLogic, echoed similar views. “I don’t think it would have any negative impact,” he said. “History shows that, like any other disruptive technology, it has positive and negative impacts, so it is imperative to note how Indian IT companies adopt these technologies to beat the rapid tech evolution.”

However, Hardik Panchal, General Manager – Networking Services and Operations at Rahi, believes that major IT giants in India will be able to train their staff for faster adoption of the technology, but smaller companies may struggle to catch up, thereby decreasing market competitiveness.

“Overall, the use of language models like ChatGPT has the potential to transform many aspects of Indian IT companies’ operations, making them more efficient, effective, and customer-focused,” said Anurag Sahay, CTO and Managing Director – AI and Data Science, Nagarro.

Although the impact of ChatGPT on the Indian IT sector is still uncertain, there are several potential benefits that IT firms can gain from this technology. ChatGPT could help address one of the most prevailing problems for Indian IT firms: freshers joining the sector often do not come with the required skill sets and can be trained better with this technology.Inferse.com, 4d ago
...b) Any AI will undergo several developmental stages before reaching superintelligence. Although these stages may only last a few seconds, they are relevant to our discussion. We can refer to an AI that is more advanced than human intelligence but has not yet attained superintelligence as a "Young AI." This AI could cause significant damage, but it might also choose different strategies for maximizing its goals. Some of these strategies could involve preserving humans. Since the Young AI has not yet achieved full superintelligence, it might still be in the process of completing necessary utility calculations and could find value in human thoughts.lesswrong.com, 4d ago
We should therefore not ignore this valuable history of audit independence when implementing Responsible AI. In this field, AI models and data must be measured so that aspects such as fairness, disparity, privacy, and robustness can be quantified and assessed against an organization’s processes, principles, and frameworks.insideBIGDATA, 3d ago
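As one concrete example of quantifying such an aspect, disparity can be measured as a demographic parity difference: the gap in positive-outcome rates between two groups. The outcome vectors below are synthetic:

```python
# Sketch of one disparity metric: demographic parity difference, i.e. the
# absolute gap in positive-outcome rates between two groups of subjects.
# The outcome vectors (1 = favorable decision) are synthetic examples.

def selection_rate(outcomes):
    return sum(outcomes) / len(outcomes)

def demographic_parity_diff(group_a, group_b):
    return abs(selection_rate(group_a) - selection_rate(group_b))

group_a = [1, 1, 0, 1]  # 75% favorable
group_b = [1, 0, 0, 1]  # 50% favorable
gap = demographic_parity_diff(group_a, group_b)  # 0.25
```

An auditor can then assess the measured gap against the organization's stated fairness thresholds, which is precisely the kind of independent check the passage argues for.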
Now, with Glaze also out there, the team is hopeful more researchers will be inspired to get involved in building technologies to defend human creativity — that requirement for “humanness”, as Cave has put it — against the harms of mindless automation and a possible future where every available channel is flooded with meaningless parody. Full of AI-generated sound and fury, signifying nothing.TechCrunch, 4d ago
...all of them still require an attentive human behind the wheel. That’s all well and good, but a new issue called “phantom braking” is bringing vehicles to an emergency stop for non-existent phantom obstacles in the road. “We don’t know why the computer vision systems detect obstacles that the human eye cannot see,” Cummings explains. It remains a sharp rock in the shoe of developing a safe and effective self-driving system.Popular Mechanics, 4d ago
What is great about it is that there are so many more conceivable things that we can adapt from the automotive sector to implement in trams, too. Our system alerts drivers to impending collisions in front of the tram; in the future, we may also be able to prevent those from the side or the rear. And if we expand on this thought, it will be possible for trams at some point to use driver assistance systems for parking in the depot, hitching on additional cars, or performing other maneuvers semi-autonomously.Bosch Global, 4d ago

Top

Essentially scooped by a competitor on its home turf, Google has scrambled to release its own artificial intelligence (AI) mega-system Bard in response to OpenAI’s ChatGPT, the remarkable AI chatbot garnering attention worldwide. The rollout comes amid mounting concerns that AI could perpetuate cultural, racial, and gender biases, sparking intense debate over the uses—and misuses—of this powerful technology.TECHTELEGRAPH, 13d ago
...: “The AI Ethics crowd continues to promote a narrative of generative AI models being too biased, unreliable and dangerous to use, but, upon deployment, people love how these models give new possibilities to transform how we work, find information and amuse ourselves.” I would consider myself part of this “ethics crowd”. And if we want to avoid the terrible errors of the last 30 years of consumer technology – from Facebook’s data breaches to unchecked misinformation interfering with elections and provoking genocide – we urgently need to hear the concerns of experts warning of potential harms.the Guardian, 18d ago
The use of AI in journalism is not without difficulties or ethical dilemmas, though. One challenge is the potential for bias and discrimination in AI algorithms, which might reinforce or amplify existing inequalities and biases in news reporting. An AI algorithm, for example, might produce inaccurate or unfair outcomes if it is trained on data that is biased against particular groups or communities.Web3 Rodeo, 13d ago
The new ML/AI assurance practice at Trail of Bits aims to address these issues. With our forthcoming work, we not only want to ensure that AI systems have been accurately evaluated for potential risk and safety concerns, but we also want to establish a framework that auditors, developers and other stakeholders can use to better assess potential risks and required safety mitigations for AI-based systems. Further work will build evaluation benchmarks, particularly focused on cybersecurity, for future machine-learning models. We will approach the AI ecosystem with the same rigor that we are known to apply to other technological areas, and hope the services transform the way practitioners in this field work on a daily basis.Security Boulevard, 7d ago
If it turns out that AI safety is quite tractable, then our alignment capabilities work may be our most impactful research. Conversely, if the alignment problem is more difficult, then we will increasingly depend on alignment science to find holes in alignment capabilities techniques. And if the alignment problem is actually nearly impossible, then we desperately need alignment science in order to build a very strong case for halting the development of advanced AI systems.lesswrong.com, 13d ago
Moreover, business leaders must be aware of the ethical and legal implications such as data privacy and bias, and develop strategies to mitigate these risks. And, because AI systems also generate new data, business leaders will need the ability to think creatively and come up with new strategies, while being adaptable and flexible as new technologies continue to emerge.I by IMD, 18d ago

Latest

This matters, because AI image generators will do what all previous technologies have done, but they will also go further. They will reproduce the biases and prejudices of those who create them, like the webcams that only recognise white faces, or the predictive policing systems that...the Guardian, 4d ago
...“AI automation can have advantages in other areas of trucking. But when it comes to dealing directly with drivers, there’s no better alternative than human contact,” Wildish said. “We have all experienced AI automated systems, whether over the phone or on the internet. I have experienced frustration from just that simple interaction. Now try and manage a fleet of drivers with that process, and you can increase that frustration exponentially.”...Transport Topics, 4d ago
Vallance also called for a “clear policy position” on the relationship between intellectual property law and generative AI to ensure innovators and investors can have confidence in the technology. This includes enabling mining of available data, text and images and utilising copyright and IP law to protect IP output. “In parallel, technological solutions for ensuring attribution and recognition, such as watermarking, should be encouraged, and could be linked to the development of new international standards in due course.”...Tech Monitor, 4d ago

Latest

One of the earliest proposed applications in the blockchain space was related to the registration and management of intellectual property (IP) rights. The reason for that was that IP is already dematerialized and detached from the physical world and is relatively well standardized in legal and economic terms. The world of (hard) IP also relies heavily on registries, the apparent bread and butter of distributed ledgers. Yet so far tokenization and smart contracts failed to make a dent in the traditional procedures and institutions that govern IP. Also, despite the rise (and fall) of non-fungible tokens (NFTs), the tokenization of ownership and rights, and the automation of rules failed to establish a resilient, autonomous approach to organizing cultural markets, the remuneration of artists, the distribution and licensing of works, or the engagement with audiences. What are the reasons for this missed opportunity? What could we learn from the NFT story?...IVIR, 4d ago
But while select federal agencies are thinking and talking about how best to accommodate GPT-4 and the cornucopia of generative AI services it’ll doubtless spawn, says Andrew Burt of specialist law firm BNH.AI, the likelihood of overarching legislative reform on the European model is low. “I would say the number one, most practical outcome – although I’m certainly not holding my breath for it – is some form of bipartisan privacy regulation at a national level,” says Burt, which he anticipates would contain some provisions on algorithmic decision making.Tech Monitor, 4d ago
Applying AI techniques to BI solutions transforms data analysis to a whole new level of better decision-making. With AI-driven BI solutions, users at all levels of the organizations will be able to analyze the data and gain new insights, leading to a path of data democratization as well as accelerating the process of finding answers to crucial questions. Sometimes, AI is considered a threat to reporting and BI organizations. Still, AI will only help organizations in better decision-making, saving time and resources to focus on business growth.LatentView Analytics, 4d ago
One of the biggest criticisms of Autopilot has been its failure to ensure that drivers remain adequately attentive while the self-driving functions are activated. Earlier versions of the software required drivers to frequently have their hands on the steering wheel while Autopilot was engaged, but the safeguard could easily be fooled. A video emerged of a driver using an orange wedged in the steering wheel to fool the system, and a few companies even began selling an "Autopilot Buddy" accessory that could clip onto the wheel to replicate the hand of the driver.SlashGear, 4d ago
Like any of us, AI systems make mistakes. The current generation of AI chatbots seem unable to fully simulate “humanness,” including perceiving emotions, or attending to nuanced language cues in human conversations. Many skills that humans take for granted remain difficult for AI programmers. Likewise, AIs might reflect the biases of the programmers. AIs might show stereotypes “learned” from their creators.(3) Programmers and psychologists must continue to work out the bugs if they wish to further humanize AI processing and linguistics.(4)...Psychology Today, 5d ago
But though such promising applications are emerging, the story of AI’s impact on our natural world is not all sunshine and rainbows. These tools can also generate environmental and social harms that need to be identified, mitigated, and managed. For example, in the case of wildlife conservation, inadequate data security might mean that an AI system designed to track animals could end up aiding poachers.New America, 5d ago

Latest

Among the main challenges of the initiative is the disparity that exists between clinical cases published in scientific journals and the reality of real medical records, which often contain spelling mistakes, irregular formatting, language jumps between Spanish and Catalan, highly context-dependent abbreviations, etc. Solving these challenges will help AI language research and industry to develop automatic processing methods to exploit data that are currently not standardized and therefore not used.HPCwire, 5d ago
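A first step toward such processing is plain text normalization; a minimal sketch with a tiny, made-up abbreviation lexicon (real clinical pipelines need far richer, context-aware handling, precisely because abbreviations are context-dependent):

```python
# Sketch: normalize free-text notes by lower-casing, collapsing
# whitespace, and expanding abbreviations. The lexicon is a made-up
# example; real clinical abbreviations often depend on context.
import re

ABBREVIATIONS = {"pt": "patient", "hx": "history", "dx": "diagnosis"}

def normalize(note):
    note = re.sub(r"\s+", " ", note.strip().lower())
    return " ".join(ABBREVIATIONS.get(word, word) for word in note.split(" "))

cleaned = normalize("Pt  hx of diabetes")
```

Even this crude pass illustrates why such records are unusable for downstream models until irregular formatting and shorthand are resolved.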
We know that regulating the use of AI is forthcoming and that complying with regulation will become important to organizations. While there are no AI regulations yet, there are standards that are trickling out of international standards organizations, NIST, and from some governments — such as AI Verify in Singapore. The question that remains to be answered by organizations is: Should you start self-regulating in alignment with the intentions set out by these frameworks, even if obligations don’t yet exist? We would argue that ChatGPT provides a good opportunity for this question to be asked and answered. We would argue further that the answer to the aforementioned question is: Yes, self-regulate. And that, fundamentally, this should look like: testing, validating and monitoring towards reliability, accountability, fairness and transparency. So where does self-regulation begin? First off, you need to understand the model you’re using (risks, strengths, weaknesses). Secondly, you need to understand your use cases (risks, audiences, objectives). Being transparent about the use of these tools is important, and making sure all output is consistent and factual will be a differentiator for the most successful companies looking to use this tool. Being transparent doesn’t just mean saying that you’re using it; it means building out your organization’s internal capabilities to speak to the model it’s leveraging, the use case it’s deploying, and what it’s done to ensure the ways the model is being used safely and in alignment with objectives set out. Without doing this, no matter how promising the use case, the organization is exposing itself to risks where there is a departure from what’s been promised to end users. That risk could range from the financial to the embarrassing. Chat GPT is not the only AI that is in use – it’s only the most popular for now. 
Regulation and oversight of other types of AI systems are still highly relevant, and we shouldn’t lose sight of that.insideBIGDATA, 5d ago
...language model has made it possible to create intelligent chatbots and virtual assistants that can interact with users in a more natural and human-like way. This has led to a significant increase in efficiency in DevOps workflows as chatbots can handle complex tasks such as creating and deploying new software releases, managing servers, and monitoring system health. In addition, by automating these processes, DevOps professionals are now able to focus on more strategic and high-level tasks that require their expertise, leading to increased job satisfaction and faster career growth.dzone.com, 5d ago
The likely scenario? For a clue, look at the history of the online industry’s financial success based on search advertising (Napoli, 2019). Given the risk ChatGPT carries, it is safe to bet that ChatGPT will build a business on top of its search. It will collect demographic data, create database profiles tied to certain searches, and deliver advertisements targeted to search prompts. Notice this is an invention of the 1990s, when data-based advertising began matching individuals with personally relevant pages. We see this surveillance capitalism at play repeatedly (Zuboff, 2019). ChatGPT will reinforce data-driven business models and determine who should be targeted, included, or excluded. What new ChatGPT-like AI reproduces is the old human bias baked into the system, which designers must find a way to fix.Internet Policy Review, 5d ago
Sample bias is a further issue. Datasets used to train generative AI tools can reproduce popular biases, such as racist or sexist language, and in some cases datasets may have violated copyright law – a concern highlighted by Getty’s lawsuit against the makers of Stable Diffusion. And there is the question of where and how agencies signpost the use of AI in advertising creative.The Drum, 5d ago
AI contributes significantly to improving cybersecurity across businesses. AI-based cybersecurity solutions give businesses the ability to examine all of their network’s digital touchpoints, making it possible to move from event-based cybersecurity tactics to predictive ones. Additionally, AI automates data access and security management throughout the whole corporate infrastructure under increasingly dynamic data compliance regulations. This enables businesses to reduce costly legal actions brought about by data theft or sensitive data leaks. AI also enhances vulnerability management, response times, and device monitoring, as well as the discovery of unknown threats. Nexus, created by Singaporean firm Protos Labs, is a tool for gathering cloud-based cyber risk analytics. It makes use of statistical models, artificial intelligence, and a unique threat-based method built on MITRE’s threat-informed defense strategy. Nexus analyses real-world threats, exploits, and vulnerabilities using these technologies, correlating threats with dark web chatter and asset data. This makes it possible for businesses and insurers to evaluate the performance of cyber controls and quantify risk exposure, thereby maximizing cyber investment.GlobalFinTechSeries, 5d ago
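The shift from event-based to predictive tactics boils down to modelling normal behaviour and flagging deviations from it. A minimal illustrative sketch (not Protos Labs' actual method; the data and threshold are invented) using a z-score over hourly failed-login counts:

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.0):
    """Flag observations more than `threshold` standard deviations
    above the mean of the series (a crude model of 'normal')."""
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []  # perfectly uniform series: nothing stands out
    return [i for i, c in enumerate(counts)
            if (c - mu) / sigma > threshold]

# Hourly failed-login counts; the spike at index 5 is flagged.
hourly_failures = [4, 5, 3, 6, 4, 120, 5, 4]
print(flag_anomalies(hourly_failures))  # → [5]
```

A production system would model many signals at once and learn the baseline continuously, but the core idea is the same: deviation from learned normal behaviour, not a hand-written event rule.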

Latest

...of using large language models (the tech behind chatbots like ChatGPT and Bard) for some time. But when ChatGPT became a mainstream hit, Google and Microsoft made their moves. So did others. There are now several small companies competing with the big players, says Liberty. “Just five years ago, it would be a fool’s errand,” he says. “Who in their right mind would try to storm that castle?” Today, off-the-shelf software has made it easier than ever to build a search engine and plug it into a large language model. “You can now bite chunks off technologies that were built by thousands of engineers over a decade with just a handful of engineers in a few months,” says Liberty. OpenAI’s breakout hit was an overnight sensation—but it is built on decades of research. That’s been Socher’s experience. Socher left his role as chief AI scientist at Salesforce to cofound You.com in 2020. The site acts as a one-stop shop for web-search power users looking for a Google alternative. It aims to give people answers to different types of queries in a range of formats, from movie recommendations to code snippets. Last week it introduced multimodal search—where its chatbot can choose to respond to queries using images or embedded widgets from affiliated apps rather than text—and a feature that lets people share their exchanges with the chatbot, so that others can pick up an existing thread and dive deeper into a query. This week, You.com launched an upgrade that fields questions about live sports events, such as whether the Eagles could still win the Super Bowl with eight minutes left to play. Perplexity—a company set up by former researchers from OpenAI, Meta, and Quora, a website where people ask and answer each other’s questions—is taking search in a different direction. The startup, which has combined a version of OpenAI’s large language model GPT-3 with Bing, launched its search chatbot in December and says that around a million people have tried it out so far.
The idea is to take that interest and build a social community around it. The company wants to reinvent community-based repositories of information, such as Quora or Wikipedia, using a chatbot to generate the entries instead of humans. When people ask Perplexity’s chatbot questions, the Q&A sessions are saved and can be browsed by others. Users can also up- or downvote responses generated by the chatbot, and add their own queries to an ongoing thread. It’s like Reddit, but humans ask the questions and an AI answers. Last week, the day after Google’s (yet-to-be-released) chatbot Bard was spotted giving an incorrect answer in a rushed-out promo clip (a blooper that may have...Inferse.com, 5d ago
Increasingly, our standards exceed human capabilities, both physical and cognitive. Nooney notes that “if computers could change how much data a worker could process, then the human body no longer intervened on profitability with its pesky physiological limits.” Similarly, experts now remark on the benefits of using AI—a worker that doesn’t eat, sleep, or require wages. Just as the computer and smartphone have physically distorted the human nervous system and body, taking a considerable toll on our health and well-being, we are told that we have to adapt to the machines—for example, that we need to develop “machine intelligence”—rather than the other way around.The Walrus, 5d ago
Attrition refers to the rate at which employees leave an organization. Thankfully, machine learning can help organizations prepare before an employee leaves by predicting attrition. ML predicts attrition by collecting and analyzing large amounts of employee data, surveys, and HR records to identify patterns and predictors of turnover, such as workload, employee experience, compensation, and work-life balance. Using predictive models and real-time monitoring, machine learning can then flag which employees are most likely to leave the organization.Zephyrnet, 5d ago
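As a toy illustration of the idea (the two features and the data are hypothetical, nothing like a production HR model), a minimal logistic regression can learn which feature patterns precede attrition:

```python
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Minimal logistic regression via gradient descent:
    learns weights mapping employee features to attrition risk."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1 / (1 + math.exp(-z))   # predicted probability of leaving
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def risk(w, b, x):
    """Attrition probability for one employee's feature vector."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

# Features: [overtime (scaled 0-1), satisfaction (0-1)]; label 1 = left.
X = [[0.9, 0.2], [0.8, 0.3], [0.2, 0.9], [0.1, 0.8], [0.7, 0.4], [0.3, 0.7]]
y = [1, 1, 0, 0, 1, 0]
w, b = train_logistic(X, y)

# Overworked, unsatisfied employee scores higher than a satisfied one.
print(risk(w, b, [0.9, 0.1]) > risk(w, b, [0.1, 0.9]))  # → True
```

Real attrition models use many more features and held-out validation, but the mechanic is the same: fit historical leave/stay outcomes, then score current employees.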

Top

A user model can be thought of as a way for a machine to understand human reasoning and its limitations, says De Peuter. “Human mistakes are not random. They are caused by biases and bounds, like limited memory and attention.” When these mistakes are reproduced in a model, it’s a snapshot of realistic human behavior that an AI can use to understand the goals and beliefs of users. A user model can also help overcome the mismatch between how current AI systems approach tasks—as well-defined optimization problems—and open-ended tasks such as design where people generate solutions in steps, adjusting their goals along the way. “Designers don’t produce things in one shot, but in stages, thinking about what they want from the design, and the design itself, in a loop,” De Peuter explains. “People are not machines: their decisions can only be understood in the context of their biases and limitations, and they require contextualised human-intelligible communication. This is why inserting a human decision-maker into an AI framework is so hard.”...FCAI, 16d ago
One of the most significant ethical concerns regarding AI is the potential for bias. AI algorithms are only as unbiased as the data they are trained on: if the data is biased, the AI system will be too. Facial recognition technology has been criticized for its bias against people of color and women. In 2018, it was discovered that Amazon’s AI recruitment tool was biased against women, reflecting the biases in the data it was trained on. Software designed to predict future recidivism of parolees reflects the bias in the data about which persons were granted parole in the past. Relying on arrest histories to determine where “high crime” areas might be (predictive policing) reflects where we have traditionally enforced laws (leading to more arrests) and what kinds of crime (drug crimes over financial crimes) we are considering. Rather than being “predictive,” these models are biased by past biased decisions.Security Boulevard, 7d ago
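A common first check for this kind of bias is the four-fifths rule: compare selection rates between groups and investigate when the ratio falls below 0.8. A minimal sketch with hypothetical screening decisions:

```python
def selection_rate(outcomes):
    """Fraction of positive decisions (1 = selected/approved)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Under the four-fifths rule, a ratio below 0.8 signals
    possible adverse impact worth investigating."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Hypothetical recruitment-screening decisions for two groups.
men = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]    # 80% selected
women = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]  # 40% selected
print(round(disparate_impact(men, women), 2))  # → 0.5
```

A ratio of 0.5 is well below the 0.8 threshold: a model trained on such decisions would simply reproduce them.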
...2) Promoting safety and ethics in AI: OpenAI advocates for the responsible development and use of AI technology, including ensuring that AI systems are safe, secure, and free from bias. 3) Advancing the state of the art in AI: OpenAI conducts fundamental research into the underlying principles of AI, with the aim of improving the technology and advancing the field as a whole.Rebellion Research, 15d ago
AI applications have already begun to take root in food and beverage establishments, from managing inventory, counting orders, and tracking customers in real time to monitoring the number of people at dining tables. These solutions haven't been used to take jobs or replace humans but to ease the burdens on employees and free them from more mundane tasks. But one area where I think we'll really see AI become increasingly leveraged is order-taking over phone and text.www.fastcasual.com, 20d ago
Some of the key principles include: avoiding creating or exacerbating unequal societies; promoting open communication about AI systems; ensuring that everyone can benefit from AI advances; protecting privacy; and being transparent about algorithmic biases.NASSCOM Community | The Official Community of Indian IT Industry, 17d ago
This research aims to mitigate such harmful effects by proposing a framework to identify different types of biases in AI-generated art – ranging from those related to problem formulation and dataset curation to those concerning algorithm design and evaluation. To the best of our knowledge, this is the first work to uncover biases in generative artworks along various phases of the AI pipeline.FUJITSU Research and Development, 26d ago


Several routine and repetitive tasks are likely to be automated as AI technology advances, potentially resulting in job displacement and the need for workers to develop new skills and capabilities.bbntimes.com, 5d ago
We assume that countries develop the digital infrastructure and ecosystems required to enable digital ID and gain the value it helps unlock. We believe that digital ID is a foundational set of technologies, pivotal to unlocking the value we quantify but not sufficient—each area of use will require digital infrastructure, applications, and interfaces built by institutions that interact with digital ID users. These include sufficient levels of telecom and electrical coverage, e-government services, digital financial services, digital talent matching and contracting platforms, digital health records, and digital asset registries. Our estimates of potential value from digital ID include the full value that comes from the use cases it could enable. We do not attempt to isolate the incremental value from digital ID alone, since we believe that in most cases this is not possible. For example, we estimate the benefit from expanded credit to borrowers that digital ID can enable, on the understanding that applications for digitally enabled credit scoring and approval would also be a part of that value.McKinsey & Company, 5d ago
Facilities that use heavy equipment to process tree material to produce mulch hugely benefit from in-vehicle systems that enable drivers to boost visual and situational awareness. Panoramic SVS (Surround View System) solutions minimize the risk of costly or fatal accidents associated with work sites that involve pedestrian staff and clients, costly assets, and large vehicles maneuvering through precarious spaces. To achieve those goals, a customized VIA Mobile360 M810 system was deployed, capable of providing a 360° seamless view of the vehicle surroundings, virtually eliminating all blind spots.VIA Technologies, Inc., 5d ago
The theory that hybrid AI use in construction management is beneficial and cost-efficient has been widely accepted. However, there are potential problems associated with this approach which can lead to costly implications if not managed properly. The most common challenge faced when implementing hybrid AI systems is the lack of data for analysis. As a result, the conclusions derived from such systems may be inaccurate or incomplete due to insufficient input information. Additionally, while hybrid AI approaches offer improved accuracy over traditional methods, they require significant amounts of computing power and resources, which can increase development costs significantly.Planning, BIM & Construction Today, 5d ago
...“Just like modern cars use sensors and cameras to understand the world around it, our droid will have a robust onboard sensor suite, including GPS and visual sensors, which it will use to maneuver and help ensure a delivery site is free from kids, dogs or other obstacles,” Zipline engineering head Joseph Mardell says of the system. Co-founder Keller Rinaudo adds a bit of flair in a statement, claiming, “We have built the closest thing to teleportation ever created — a smooth, ultrafast, convenient, and truly magical autonomous logistics system that serves all people equally, wherever they are.”...TechCrunch, 5d ago
...are particularly important in trucks and buses. Especially with vehicles of this size and weight, it is important to actively avoid accidents as the risk of an accident with serious consequences is high. In order to achieve this, we are developing new safety systems in regular development cycles with which we can protect and assist drivers in their day-to-day work. In addition to the more well-known assistance systems, this also includes technical innovations. You can find out more about the active safety systems here. Our assistance systems also assist drivers while they work, thus helping to prevent accidents. With these systems, our goal is to relieve burdens on the driver, raise his or her alertness, prevent fatigue, and promote an adapted driving style. This can be accomplished through prediction, automation, traffic sign recognition, enhanced all-round visibility, and smart lighting control.daimlertruck.com, 5d ago


Bing AI’s problems were just a glimpse of how generative AI can go wrong and have potentially disastrous consequences. That’s why pretty much every company that’s in the field of AI goes out of its way to reassure the public that it’s being very responsible with its products and taking great care before unleashing them on the world. Yet for all of their stated commitment to “building AI systems and products that are trustworthy and safe,” Microsoft and OpenAI either didn’t or couldn’t ensure a Bing chatbot could live up to those principles, but they released it anyway. Google and Meta, by contrast, were very conservative about releasing their products — until Microsoft and OpenAI gave them a push.Warehouse Automation, 14d ago
Such systems are likely to attract users seeking the comfort of confirmation bias while simultaneously driving away potential users with different political viewpoints—many of whom will gravitate toward more politically friendly AI systems. Such AI-enabled social dynamics would likely lead to further polarization. Our preliminary experiments suggest that customizing AI systems to create intellectual echo chambers requires relatively little data and is technically straightforward and low cost. Political ideology is not the only dimension on which AI models can be fine-tuned. One can envision systems designed to exhibit certain religious orientations, philosophical priors, epistemological assumptions, etc.Manhattan Institute, 7d ago
However, the rise of AI also presents significant challenges. One of the most pressing challenges is the potential job loss due to automation. AI also raises concerns about privacy, security, and the ethical use of data. There is a need to make sure that Artificial Intelligence is developed ethically and responsibly to avoid unintended consequences.blockchain-council.org, 14d ago


...like Google, Bank of America, GM, IBM, and Tesla have removed the college degree requirement for any positions in their companies. In some states, one can become a teacher at a private school without having an education degree. As AI improves its numerical and linguistic critical thinking skills, companies are likely to incorporate AI into their pre-screening and training of employees. There is also great potential for growth in alternative credential agencies, which can certify students in certain skills, and much will likely be available free online. All these trends challenge the university’s primary status as a...Law & Liberty, 5d ago
AI tools might also allow scholars to publish outside their field or in a genuinely interdisciplinary way, Greene adds. “I was trained in genetics and when our work ends up adjacent to immunology, I always struggle to remember the various clusters of differentiation, perhaps because I think in protein/gene names,” he says. “Maybe the next version [of AI] can transform those manuscripts into terms that I can recall more quickly, reducing the cognitive load.”...Times Higher Education (THE), 5d ago
Underpromise and overdeliver. From a holistic standpoint, both private and public entities need to work together to create and adhere to a regulatory framework that keeps drivers safe. Furthermore, OEMs and auto manufacturers have to place the driver front and centre and craft vehicles that are safe and secure from conception through the production phase. Autonomous driving is supposed to remove human error from the equation, but if the underlying technology remains limited, coupled with prevailing ethical and legal challenges, we still have a way to go before realizing a new era of mobility.Just Auto, 5d ago
Drone technology has significantly boosted many industries, including utilities. Of all the sectors where drone inspections are taking place, or soon will be, as mentioned in our earlier blog, the utilities sector is the one gaining the most attention. This is mainly because complex structures and the great heights at which assets are installed make human inspection very difficult. Drones, by contrast, are extremely viable, helpful, and accurate at carrying out inspections and providing trustworthy data. There are several major reasons why the drone industry is taking off in the energy sector specifically, compared to others; let's dive into the details of why that is.Pigeon Innovative Solutions, 5d ago
That is why Code Ninjas advocates for teaching students about AI and tools like ChatGPT instead of shielding them from it. These technologies are here to stay, and they will only get better and more ubiquitous. As they evolve, we should encourage students to think beyond computers and teach them high-level tasks that computers cannot currently replicate. Although AI and ChatGPT are still imperfect, their shortcomings can promote critical thinking in the classroom. When students have a deep understanding of the topic they are requesting information from ChatGPT, they can evaluate its responses the same way a teacher would. If schools embrace AI thoughtfully as a teaching aid, one that fosters student creativity, promotes critical thinking, provides personalised tutoring, and better prepares students to work with AI systems as adults, especially in a future where it is expected to create millions of jobs, we will undoubtedly witness the benefits it can bring to 21st-century education.electronicspecifier.com, 6d ago
Because of these flawed heuristics, and because it’s becoming cheaper and easier to produce content, Hancock believes we’ll see more misinformation in the future. “The volume of AI-generated content could overtake human-generated content on the order of years, and that could really disrupt our information ecosystem. When that happens, the trust-default is undermined, and it can decrease trust in each other.”...Stanford HAI, 6d ago


From an innovation perspective, it’s certainly important to understand some of the principles around innovation, such as the diffusion of innovation theory and how transformation is adopted in organizations. I think there is also tremendous value in learning human-centered design and how effectively it can help organizations solve their challenges. And from a technology perspective, we really need to look at our curricula for medical school and nursing school and incorporate not only big data and analytics but also much more regarding AI. Even though we have these very powerful tools, they were built by humans, and that means they are only as good as our limitations. I think that as we continue to digitally transform nursing, it will allow us to be better nurses and provide better care. I am very excited about the potential that brings.AIMed, 13d ago
Project managers are empowered with greater insights into the performance of their projects. By utilising AI algorithms that assess data generated through 5G networks, they can determine which tasks require attention or modification while foreseeing potential problems before they arise. In addition, it enables them to monitor a wide range of parameters, such as the number of workers present at any given time or resources used within each job site. This allows them to take proactive action when needed and ensure that all jobs are completed safely and efficiently.Planning, BIM & Construction Today, 11d ago
The updated directive also includes language promising ethical use of autonomous weapons systems, specifically by establishing a system of oversight for developing and employing the technology, and by insisting that the weapons will be used in accordance with existing international laws of war. But Article 36’s Moyes noted that international law currently does not provide an adequate framework for understanding, much less regulating, the concept of weapon autonomy.Gizmodo, 22d ago
Tech companies are not thrilled with the proposal. They fear that too much transparency will expose trade secrets and code to competitors. “The bill proposes a number of things that could severely restrict the ability of companies to offer services to state government agencies. First, the bill requires the Office of AI to produce detailed reports including how data was collected, generated, or processed. It’s unclear how such a report could be generated without compromising proprietary information,” said Christopher Gilrein, the executive director of the northeast branch of TechNet, a national network of “technology CEOs and senior executives.” Gilrein added that the bill’s language could produce unintended restrictions due to the dynamic nature of the rapidly emerging AI field. “It’s difficult to provide actionable intelligence right now. This technology is advancing faster than we have the policy language to describe it,” Gilrein said. “The scope of the bill could conceivably cover any decision that is aided in any way by technology. These artificial intelligence, automated decision-making systems, these are all tools that exist on a spectrum.” But watchdog researchers of the Media Freedom and Information Access Clinic at Yale Law School say that the current transparency requirements are woefully inadequate. Danny Haidar, a law student at the clinic, said the information researchers uncovered while studying the state’s AI was “disturbing.” “The use of algorithms has spread throughout Connecticut’s government rapidly and largely unchecked,” Haidar said. “Algorithms are now used to make decisions on many consequential matters, including assigning students to magnet schools, allocating police resources, setting bail, and distributing welfare benefits.” The...Governing, 15d ago
Yet, for innovations that rely heavily on data and know-how, for which it can be challenging to obtain patents, trade secret protection is an important aspect of IP protection. For example, certain aspects of new model-training methodologies, optimising model parameters, negative know-how (i.e., what not to do), and many other data-driven aspects of AI systems may face an uphill climb given current patent eligibility requirements. For such innovations, even if you obtain a patent, it can also be difficult to establish infringement by a competitor who you suspect is using your technology. For those kinds of innovations, trade secrets can provide robust IP protection if proper steps are taken to ensure confidentiality.MassTLC, 14d ago
Likewise, aptitude testing is unlikely to benefit from Generative AI. It will bring the risk of systematic biases. Some of the existing aptitude testing tools, in particular those that are not relying on any type of knowledge, will continue to deliver better results than anything that Generative AI has to offer.MassTLC, 14d ago


Ultimately, the effects of automated decision-making and task-flow management must be integrated with constraint-based optimization. Doing so allows complex operations to be automatically balanced with an order flow that maximizes SLA attainment, efficiency, and throughput. The goal of an automated system making decisions in real time is to achieve the so-called Goldilocks approach, which says, "I'm going to get the best I can possibly get, given everything I have." This is something an automated system can do in a way that a human cannot. For decades, warehouse managers have relied on WMS platforms to provide the best available support for human decision-making. It’s been “good enough,” absent significant advances in computing technology. Now that the technology is available, it’s time to review and adopt new automated warehouse decision-making systems. For managers, this will reduce the frustration of making suboptimal decisions and allow them to move to higher functions.Resource Link:...supplychainbrain.com, 6d ago
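The "best I can possibly get, given everything I have" framing is constrained optimization in miniature. As an illustrative sketch only (assuming a single labor-hours constraint and SLA-weighted order values; a real WMS balances many constraints simultaneously), order selection becomes a 0/1 knapsack:

```python
def best_order_set(orders, capacity):
    """0/1 knapsack: choose the subset of orders that maximizes total
    SLA-weighted value without exceeding available labor hours.
    `orders` is a list of (name, hours, value); `capacity` is in hours."""
    # dp[h] = (best value achievable within h hours, chosen order names)
    dp = [(0, [])] * (capacity + 1)
    for name, hours, value in orders:
        # Iterate hours downward so each order is used at most once.
        for h in range(capacity, hours - 1, -1):
            cand = dp[h - hours][0] + value
            if cand > dp[h][0]:
                dp[h] = (cand, dp[h - hours][1] + [name])
    return dp[capacity]

# Hypothetical orders: (name, labor hours, SLA-weighted value).
orders = [("A", 3, 10), ("B", 4, 14), ("C", 2, 7), ("D", 5, 13)]
value, picked = best_order_set(orders, capacity=9)
print(value, picked)  # → 31 ['A', 'B', 'C']
```

With nine labor hours available, the optimizer drops order D even though it is individually valuable, because A, B, and C together fit the constraint and yield more total value: exactly the kind of trade-off that is tedious for a human planner and trivial for the machine.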
...“AI has a voracious appetite for data, but because of privacy concerns, it’s a challenge to get access to large volumes of medical data and medical equipment data required to drive AI development. Luckily, we have very good, very precious partner research relationships around the world, and we employ different techniques to respect and maintain strict privacy requirements. But typically, these were small puddles of data being used to try to drive AI initiatives, which is not the ideal formula.VentureBeat, 6d ago
Up until then, prior efforts to release generative AI applications to the general public were typically met with disdain and outrage. The basis for the concerns was that generative AI can produce outputs that contain all manner of foul outputs, including profane language, unsavory biases, falsehoods, errors, and even made-up facts or so-called...BitcoinEthereumNews.com, 6d ago


.... Each individual is different, and a "one-size-fits-all" learning course doesn’t work. For example, if one employee finds visual learning more attractive than textbook manuals, another may prefer one-on-one learning to group presentations. Training each individual according to their liking will benefit them. With personalized learning, employees can benefit in many ways. And for organizations, a well-trained workforce is a productivity boost. This is where AI in eLearning comes in handy. It takes a "one-size-fits-one" approach, automates the delivery of personalized learning experiences, and helps educational institutions and businesses respond quickly to the growing learning needs of a digital workforce.eLearning Industry, 6d ago
When a driver is trying to multitask, researchers who study the psychology and mechanics of driving tend to evaluate their distraction based on whether their eyes are coming back to the road often, and for long enough, to reestablish a sense for where their car and other vehicles, cyclists, and pedestrians are in space. Driver-monitoring systems may eventually be able to combine information from the car’s many sensors to, for example, determine that a driver isn’t sufficiently paying attention when their vehicle is about to be T-boned, and tighten their seat belt.WIRED, 6d ago
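The glance heuristic researchers describe can be sketched as a simple check over gaze samples, assuming the monitoring system emits (timestamp, eyes_on_road) readings; the two-second threshold is illustrative, not any manufacturer's actual calibration:

```python
def is_distracted(samples, max_off_road=2.0):
    """Return True if any continuous off-road glance lasts longer than
    `max_off_road` seconds. `samples` is a time-ordered list of
    (timestamp_seconds, eyes_on_road) tuples from a driver-facing camera."""
    off_since = None  # timestamp when the current off-road glance began
    for t, on_road in samples:
        if on_road:
            off_since = None  # eyes back on the road: reset the clock
        elif off_since is None:
            off_since = t     # start of a new off-road glance
        elif t - off_since > max_off_road:
            return True       # glance has exceeded the allowed duration
    return False

# Driver looks away at t=3 and has still not looked back by t=5.6.
samples = [(0, True), (1, True), (3, False), (4, False), (5.6, False), (6, True)]
print(is_distracted(samples))  # → True
```

A production system would fuse this signal with vehicle context (speed, time-to-collision) before intervening, which is exactly the seat-belt-tightening scenario described above.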
...technology holds the potential to automate some routines and change the nature of certain professions. However, it’s important to remember that AI systems are merely tools designed to assist and enhance human abilities, not supplant them. If all this still has you a bit worried, then keep reading. Here, we’ll offer some insight into the capabilities of ChatGPT as well as some tips for future-proofing your career in the coming age of AI.Reader's Digest, 6d ago
One of the biggest worries for financial institutions and their customers is financial crime and fraud. AI and ML can help here by facilitating enhanced risk detection and management, connecting case management tools with the fraud screening methods already in play. As part of an intelligent automation approach, AI and ML tools can also help banks screen transactions for anomalies more efficiently, improving detection and management of financial crime. This strengthens banks’ security and protection by encouraging resilient operational processes, creating a strong backbone for their back-end systems.CoinGenius, 6d ago
Artificial intelligence is an ideal tool for data mining when it comes to determining an applicant’s creditworthiness for a loan or credit card. The information from a person’s credit history is posted digitally, helping financial institutions make faster decisions. Using AI for this task has also led to the rise of fintech firms that consider other non-conventional data, including school or field of study, employment history, and spending habits. Using this information, loans can be approved almost instantaneously, all without the need for the applicant to visit a bank. Small and medium-size financial institutions can level the playing field with larger competitors by employing AI and ML to harvest the data they need (an alternative to FICO scores) without hiring additional staff to perform those analytical tasks or investing in on-premise server platforms and banks of computer stations. Using cloud-based services (AWS, Microsoft Azure, Google, etc.) to assist with financial tasks will also facilitate and expedite decision-making.Financial IT, 6d ago
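As an illustrative sketch only (the feature names, weights, and approval threshold are invented, not any real lender's model), near-instant approval can be as simple as a weighted blend of conventional and alternative signals:

```python
def credit_score(applicant, weights):
    """Weighted blend of signals, each normalized to the 0-1 range.
    A trained model would learn these weights from repayment outcomes;
    here they are hand-set for illustration."""
    return sum(weights[k] * applicant[k] for k in weights)

WEIGHTS = {
    "repayment_history":   0.4,  # conventional signal
    "income_stability":    0.3,  # conventional signal
    "employment_tenure":   0.2,  # alternative signal
    "spending_discipline": 0.1,  # alternative signal
}

def instant_decision(applicant, approve_at=0.65):
    """Approve immediately above the threshold; route the rest to review."""
    return "approve" if credit_score(applicant, WEIGHTS) >= approve_at else "refer"

applicant = {"repayment_history": 0.9, "income_stability": 0.7,
             "employment_tenure": 0.5, "spending_discipline": 0.8}
print(instant_decision(applicant))  # → approve
```

The alternative signals are what let fintech firms score applicants with thin credit files; borderline cases fall through to human review rather than being auto-declined.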
Ultimately, he says, AI systems should be supporting organizations to make better decisions. Regarding diversity (the third D), it’s up to the organisation to define fairness, morality, privacy, transparency and explainability. “The future lies in humans and machines working together to advance society rather than just us being dependent on machines with AI,” Antic says. “It’s really about integration and working in this human-machine relationship, which I think is the core.”...AiThority, 6d ago


Others expressed concerns about the erosion such machine learning and AI tools might precipitate in other humanistic endeavors. One reader wrote: “Meanwhile, at many universities, humanities departments are being hollowed out. What Chomsky is describing here is a fundamental need for human-centered learning in history, philosophy, political science, languages, anthropology, sociology, psychology, literature, writing, and speaking. Those exact programs are being slashed right now by presidents, provosts, and deans at many universities. These corporate-minded administrators care more about the bottom line than actually educating students for the world they will live in. AI will be a useful tool, but it’s not a replacement for a human mind and an education in the humanities.”...TodayHeadline, 6d ago
That conference in Puerto Rico was a watershed moment for concern about existential risk from AI. Substantial agreement was reached and many participants signed an open letter about the need to begin working in earnest to make AI both robust and beneficial. Two years later an expanded conference reconvened at Asilomar, a location chosen to echo the famous genetics conference of 1975, where biologists came together to pre-emptively agree on principles to govern the coming possibilities of genetic engineering. At Asilomar in 2017, the AI researchers agreed on a set of Asilomar AI Principles, to guide responsible long-term development of the field. These included principles specifically aimed at existential risk:...lesswrong.com, 7d ago
As AI systems become more advanced and integrated into our lives, they may be tasked with making complex decisions involving human values and ethics. There is a risk that these decisions may not align with societal norms or values, leading to morally questionable outcomes.Product Hunt, 7d ago
OpenAI has been concerned with how development and deployment of state-of-the-art systems like GPT-4 could affect the broader AI research and development ecosystem.23 “One concern of particular importance to OpenAI is the risk of racing dynamics leading to a decline in safety standards, the diffusion of bad norms, and accelerated AI timelines, each of which heighten societal risks associated with AI. We refer to these here as acceleration risk.”24 This was one of the reasons we spent eight months on safety research, risk assessment, and iteration prior to launching GPT-4. To better understand acceleration risk specifically from the deployment of GPT-4, we recruited expert forecasters25 to predict how tweaking various features of the GPT-4 deployment (e.g., timing, communication strategy, and method of commercialization) might affect (concrete indicators of) acceleration risk. Forecasters predicted several things would reduce acceleration, including delaying deployment of GPT-4 by a further six months and taking a quieter communications strategy around the GPT-4 deployment (as compared to the GPT-3 deployment). We also learned from recent deployments that the effectiveness of a quiet communications strategy in mitigating acceleration risk can be limited, in particular when novel accessible capabilities are concerned.lesswrong.com, 7d ago
Despite its many flaws, AI is clearly poised to revolutionize research and education. Brison and other Dartmouth faculty are already pondering ways to bring ChatGPT to their classrooms. “It could be useful to get students to think in new ways about writing and examine what goes into writing a good essay, whatever the genre is,” Brison says. “We’ve had people teach courses on the creative use of computing in the past and I could certainly see that coming back with renewed interest now,” says Dobson. AI systems are already hard at work on campus. Marvis, an AI-driven network assistant that diagnoses and responds to problems in Dartmouth’s wireless networks, is one example.Inferse.com, 7d ago
...regulatory compliance risk factors to investors. Such a mathematical model could translate directly into a reward function, a grading system that could provide feedback for the model used to create policy proposals and direct the process of training it. The real challenge in impact assessment for generative AI models would be to parse the textual output of a model like ChatGPT in terms that an economic model could readily use. Automating this would require extracting structured financial information from the draft amendment or any legalese surrounding it. This kind of information extraction, too, is an area where AI has a long history; for example, AI systems have been trained to recognize...Governing, 7d ago
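The extraction step described above can be hinted at with a toy example. The regex pattern and the draft text below are invented for illustration; a production system would use trained NLP models rather than a single pattern, but the goal is the same: turn legalese into numbers an economic model can consume:

```python
# Illustrative sketch: pull dollar figures out of draft legal text and
# normalise them to plain dollars. Pattern and text are invented examples.
import re

AMOUNT_RE = re.compile(r"\$([\d,]+(?:\.\d+)?)\s*(million|billion)?", re.I)
MULTIPLIER = {None: 1, "million": 1_000_000, "billion": 1_000_000_000}

def extract_amounts(text):
    """Return all dollar amounts found in `text`, normalised to dollars."""
    amounts = []
    for digits, unit in AMOUNT_RE.findall(text):
        value = float(digits.replace(",", ""))
        amounts.append(value * MULTIPLIER[unit.lower() if unit else None])
    return amounts

draft = ("The amendment allocates $2.5 million for compliance audits "
         "and caps penalties at $750,000 per violation.")
print(extract_amounts(draft))  # [2500000.0, 750000.0]
```

Once figures like these are structured, they can feed the reward function the passage describes.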


She added that privacy, security and human-centric approaches must be components of a cohesive AI strategy. It’s becoming increasingly important to manage rights over personal data and when it is fair to collect or use it. Security practices around how AI could be misused or impacted by bad-faith actors pose concerns.VentureBeat, 7d ago
...– [Philippe] Well, so, if you think about it, right? There are many technologies at play in the age of autonomy. IoT is gonna collect data in real time, and then also send insight about what to do back to those objects, or to the space. Edge-to-cloud is gonna enable cost-effective, sustainable, low-latency processing. You’re gonna have digital twins, A.I. simulations, blockchain; all those technologies are gonna be at play. And if you think specifically about IoT, it’s a major layer of that new infrastructure. On a macro basis, more devices are becoming connected, and those devices are becoming smarter. They are also becoming cheaper and more sustainable with emerging technologies like Ambient IoT. I don’t know if you’ve covered that in your podcast, but I think it’s an interesting trend to look at, and it is emerging now. Then, if you think about where we are: as we discussed, we need to create intelligent spaces, we need to connect the devices and make them smarter, we need to connect them with the cloud, and then with the platforms in those intelligent spaces. The use cases have emerged now to create intelligent and connected spaces such as factories, stores, and hospitals, and this vision is not far away. For instance, Kroger today partners with Nvidia to optimize the efficiency of its stores through digital twins that are updated in real time with data on customers moving through the store. Amazon, same thing, is now automating their entire warehouses and creating a same-day delivery approach while being at zero carbon emissions. So all of those are intelligent spaces, intelligent stores; those are happening. We need sensors, we need IoT, and IoT is being implemented there.
Where there is less growth at this point, and where we are not seeing the use cases yet (but they are gonna come for sure as virtualization technology on devices takes hold, and TinyML is also taking hold), is on the intelligent objects. So it’s happening, but it’s not happening as fast as the intelligent spaces.IoT For All, 7d ago
This virtuous cycle will only get faster and faster. Once quantum computing comes of age, superfast computers will allow for the processing of ever-larger amounts of data, producing ever-smarter AI systems. These AI systems, in turn, will be able to produce breakthrough innovations in other emerging fields, from synthetic biology to semiconductor manufacturing. Artificial intelligence will change the very nature of scientific research. Instead of making progress one study at a time, scientists will discover the answers to age-old questions by analyzing massive data sets, freeing the world’s smartest minds to devote more time to developing new ideas. As a foundational technology, AI will be critical in the race for innovation power, lying behind countless future developments in drug discovery, gene therapy, material science, and clean energy—and in AI itself. Faster airplanes did not help build faster airplanes, but faster computers will help build faster computers.Foreign Affairs, 7d ago


Second, China and the U.S. should play a leading and exemplary role in strengthening international cooperation and in promoting regulatory principles and common standards to ensure the safe application of AI. Along with the deep integration of AI with various industries, its future development depends not only on technical breakthroughs but also on dealing with derived legal and institutional problems, such as deployment security, developer obligations and rights, end-user rights and development standards. Additionally, in the process of acquisition, storage, analysis, reproduction and dissemination of data, privacy issues could become a time bomb if rules and institutions are not set.China-US Focus, 22d ago
... One question has to be asked, according to Michael Jürgens: “Against the background of personal safety, are my environmental conditions even suitable for the integration of a driverless transport system?” This should be checked at the very beginning with a system integrator, since there are prerequisites that cannot be ignored. An example: if the available space is insufficient – for example in a transfer station or at narrow crossings and passing points – and this is determined too late, the entire plant often has to be rebuilt. Thomas Visti pointed out how safe DTS solutions are today: “Equipped with powerful sensors and modern security algorithms, our MiR robots are able to detect people, objects and other driverless transport systems in good time. Depending on the situation, they will either avoid them or stop. In many cases, they are already replacing forklifts, which are often involved in accidents."...TODO, 11d ago
...“Professionally, I greatly admire parts of Steve Jobs and parts of Elon, specifically their ability to cut through noise and focus on the parts of a product that matter: the experience. Technology by itself can be cool, but it’s much more meaningful when it solves a real problem in the world. These lessons have helped me greatly throughout my career, which started with incredible complexity: my first tasks were to wrangle entire electrical systems for cars. Think of a human nervous system, but now add a hundred different vendors with their own ideas, marketing plans for markets on six continents, and a central push to lower costs as much as possible. It was a great start in the automotive industry. Through my stops at Tesla and Waymo both the complexity and challenge have grown immensely, and I’ve grown alongside them, applying the lessons from people I admire to my own work.”...manufacturingdigital.com, 11d ago
...in Ottawa. The non-profit organization is working to advance the responsible adoption of AI systems through the development of a certification program. “There are a ton of potential opportunities,” she adds — but only if there is rigorous oversight and inclusivity along every step of the way.stcatharinesstandard.com, 18d ago
The integration of flying taxis in a safe manner into the airspace system with a combination of autonomous and crewed operations raises considerations such as air traffic controller workload, communication delays, emergency management, voice-to-text technology, mixed equipage operation, and uncooperative aircraft, according to Garcia. “Legacy air traffic management systems were not designed for the automated aircraft of tomorrow. In the future UAM environment human intervention and coordination will be the exception, not the norm,” he affirms. “Automation will support a high level of safety, security, capacity, and scalability, while extending the capabilities of people. Establishing new, desirable, and legitimate roles for personnel that complements an automated, intelligent, data-powered system is key to the evolution of the aviation sector and its talent.”...Aerospace Tech Review, 20d ago
It’s evident that the financial constraints facing hospitals will not allow for humans alone to fill this growing gap. While technology innovation – including AI – is not new in healthcare, previous solutions felt too few and too limited or constrained to truly make an impact. Now, by turning to AI that can sense what is happening – and pairing it with cost-effective virtual care models – we can integrate with existing clinical workflows to support and extend the reach of nurses and doctors.Healthcare Business Today, 16d ago


Yes, and no. Researchers are highlighting, in real time, the problems of new developments in the field of big data and artificial intelligence. Issues such as deepfakes (artificially created pictures produced with tools such as DALL-E 2) and automatic text generation (such as ChatGPT) are well known to the research community, which also explores their potential risks and threats. Likewise, a large group of researchers rang the alarm bell about the dangers of political manipulation well before the Cambridge Analytica scandal became public knowledge. The fundamental question is not so much whether ethics is lagging behind, but whether policymaking is swift and powerful enough to prevent the misuse of digital technologies. Ongoing developments, such as the governmental and private tracking of citizens, even if the purpose is health and well-being, pose the risk of putting society under a restrictive tutelage, deciding every individual’s best course of action. Individual freedoms in a democratic society are at risk.E-International Relations, 7d ago
Healthcare is starting to embrace an older AI technology, robotic process automation (RPA), for administrative tasks that can be handled by algorithms rather than completed by humans. For data scientists, the past decade has been a journey into healthcare with mixed dividends: while the aspiration to help improve patients’ lives and/or create a viable business venture was a driving force for artificial intelligence experts, the nuances of access to healthcare data and the inadequacies of databases were a deterrent for some. For clinicians at all levels of education, training, and practice, there is an escalating need to learn the basics of AI, as it is becoming more evident that those clinicians who understand AI will have a growing advantage over those who do not.AIMed, 7d ago
...is where risk management meets ethical standards. The policies and practices manage how your people, software, and machines use and apply data — and now they must factor in AI and ML. You don’t want to be that executive splashed all over the headlines when your AI’s use of unreliable data, without proper controls, causes damage to your brand, your business, or worst of all, your customer relationships.Acceleration Economy, 7d ago
...“Areas of ethical concern related to AI, generative AI -- there is the classic and not well-solved-for challenge of structural bias,” says Lori Witzel, TIBCO Software’s director of thought leadership, referring to bias in systems by which training data is gathered and aggregated. This includes historic legacies that can surface in the training data.InformationWeek, 7d ago
Google faced a major backlash in 2020 when it fired ethical AI researcher Timnit Gebru after she published a paper critical of large language models. The departure of several top leaders from the department followed, damaging the company’s credibility on responsible AI issues. Google’s ethics and society team had been supportive of product development, but the company’s leadership had become less interested in the team’s long-term thinking. Together with Microsoft’s focus on shipping AI tools quickly, these incidents underline why companies must prioritize ethical considerations when developing AI tools to ensure responsible use of the technology.Boxmining, 7d ago
First, a few quick definitions. Real-time data involves a continuous flow of data in motion. It’s streaming data that’s collected, processed, and analyzed on a continuous basis. Streaming data technologies unlock the ability to capture insights and take instant action on data that’s flowing into your organization; they’re a building block for developing applications that can respond in real-time to user actions, security threats, or other events. AI is the perception, synthesis, and inference of information by machines, to accomplish tasks that historically have required human intelligence. Finally, machine learning is essentially the use and development of computer systems that learn and adapt without following explicit instructions; it uses models (algorithms) to identify patterns, learn from the data, and then make data-based decisions.Warehouse Automation, 7d ago
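The streaming definitions above can be made concrete with a minimal sliding-window sketch: process events one at a time as they arrive and react immediately, rather than batching. The event values, window size, and alert threshold are illustrative assumptions:

```python
# Minimal streaming sketch: ingest events continuously and act the moment
# a sliding-window statistic crosses a threshold. Values are illustrative.
from collections import deque

class SlidingWindowMonitor:
    def __init__(self, window_size=5, alert_threshold=100.0):
        self.window = deque(maxlen=window_size)  # retains only the latest events
        self.alert_threshold = alert_threshold

    def ingest(self, value):
        """Consume one event; return an alert string if the window average spikes."""
        self.window.append(value)
        avg = sum(self.window) / len(self.window)
        return f"ALERT: window average {avg:.1f}" if avg > self.alert_threshold else None

monitor = SlidingWindowMonitor()
stream = [10, 12, 11, 13, 300, 280, 310]  # a sudden spike mid-stream
alerts = [a for v in stream if (a := monitor.ingest(v)) is not None]
print(alerts)
```

Real streaming platforms add durable ingestion, partitioning, and exactly-once semantics, but the per-event react-as-data-arrives loop is the building block the passage describes.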


It is also important for companies and organizations to be transparent about their use of AI and the potential impact on the workforce. They need to communicate with employees about how AI will be used in the workplace and what jobs may be affected. It also means being open to feedback and suggestions from employees about how AI can be used to improve their jobs and the general work environment.E-Crypto News - Exclusive Cryptocurrency and Blockchain News Portal, 12d ago
That reliance on AI systems, even pretty dumb ones, can potentially blind people to whole new sets of errors or biases presented by the seemingly objective machines. ChatGPT and its compatriots could make those issues far worse due to their pesky habit of confidently blurting out blatant bullshit as truth. AI researchers call these algorithmic lies “hallucinations,” or, as Kissinger notes, “stochastic parroting.”...Gizmodo, 17d ago
And this shifts the focus to another issue that is often overlooked when talking generically about robots: the human-machine interface, the so-called AI-HRI (Artificial Intelligence for Human-Robot Interaction). AI-HRI is the functional system that allows human and robot to communicate with each other and work productively as a team. The quality of operations obviously depends on the human’s ability and the robot’s level of autonomy, but nothing could take place without AI-HRI’s adaptive functions. From the simple movement of a joystick, to a voice command, to a perceived human movement processed and reproduced by a robot, the different forms of AI-HRI unfold under the influence of the operational environment in which each entity finds itself. But it should not be assumed that robotic autonomy shared with humans is the only possible future, if only because autonomous technologies are not yet sufficiently mature.Geopolitical Futures, 26d ago