Latest

new The idea that we live in a permanent AI revolution means that companies' transformation efforts are most likely to succeed when designed with a dual intent: successful adoption of mature technologies and readiness for accelerated experimentation with inchoate ones. Since companies continue to learn at a slower rate than technology advances, success will largely hinge on a business's relative rate of learning—which, in turn, depends on its ability to become an early adopter of the foreseeable technologies on the horizon. Today, for companies working on the adoption of stand-alone LLMs, this challenge takes the form of shaping those LLM-based transformation plans with an eye towards the arrival of what's coming next—autonomous agents. Fortune, 8h ago
new Request a free sample copy in PDF or view the report summary: https://www.expertmarketresearch.com/reports/telemedicine-market/requestsample

Telemedicine Market Overview

Understanding Telemedicine: Telemedicine encompasses a wide range of services, from virtual doctor consultations to remote patient monitoring and telepharmacy. It eliminates the need for physical presence, making healthcare services accessible to individuals globally.

Market Size and Growth: The telemedicine market reached a substantial size of USD 73.1 billion in 2023 and is poised to continue its growth journey at a CAGR of 19.3% from 2024 to 2032, ultimately reaching USD 377.0 billion by 2032. This remarkable growth can be attributed to several key factors, which we will explore in detail.

Telemedicine Market Dynamics

Technological Advancements: The rapid evolution of technology is a driving force behind the telemedicine boom. High-speed internet, smartphones, wearable devices, and improved telecommunication infrastructure have all played pivotal roles in making remote healthcare services accessible. Telemedicine platforms now boast high-quality video and audio capabilities, ensuring seamless communication between patients and healthcare providers.

Increased Adoption of Teleconsultation: Acceptance of teleconsultation has been steadily increasing. Patients have come to appreciate the convenience and accessibility of virtual appointments, particularly for non-emergency consultations. The COVID-19 pandemic further accelerated this trend, highlighting the importance of remote healthcare services.

External Telemedicine Market Trends

Changing Regulatory Landscape: Governments and regulatory bodies worldwide are adapting to accommodate telemedicine. They are implementing policies and regulations to ensure patient safety, data privacy, and the growth of telehealth services.
Staying informed about these evolving regulations is crucial for telemedicine providers.

Remote Monitoring and IoT Integration: The integration of Internet of Things (IoT) devices into telemedicine has opened up new possibilities. Remote monitoring of vital signs and health parameters enables proactive healthcare management. Patients can transmit real-time data to healthcare professionals, leading to more accurate diagnoses and treatment adjustments.

Explore the full report with the table of contents: https://www.expertmarketresearch.com/reports/telemedicine-market

Telemedicine Market Segmentation

Patient Demographics: Telemedicine serves a diverse range of patients, from tech-savvy individuals to the elderly and those residing in remote areas with limited healthcare access. Understanding these demographics is vital for tailoring services effectively.

Specialty Areas: Telemedicine extends beyond general consultations to various specialty areas, including telepsychiatry, teledermatology, teleoncology, and more. Each specialty has unique requirements and considerations, necessitating market segmentation.

Telemedicine Market Growth

Global Expansion: Telemedicine knows no geographical boundaries. Its reach is expanding worldwide, with healthcare providers, tech companies, and startups entering the market from different corners of the globe. This global expansion is contributing significantly to the industry's rapid growth.

Improved Patient Outcomes: Research indicates that telemedicine can lead to improved patient outcomes. Timely consultations, continuous monitoring, and better access to healthcare professionals contribute to early diagnosis and effective management of various medical conditions.

Recent Developments in the Telemedicine Market

Telemedicine Platforms: Telemedicine platforms are continually evolving to offer more features and capabilities.
Many now integrate electronic health records (EHRs), prescription management, and secure patient messaging, enhancing the overall patient experience.

AI and Telemedicine: Artificial intelligence (AI) is making its presence felt in telemedicine. Machine learning algorithms are being employed to analyze medical data, predict patient outcomes, and enhance diagnostic accuracy. The integration of AI promises to further transform telemedicine.

Telemedicine Market Scope

Patient Convenience: Telemedicine offers unparalleled convenience to patients. They can schedule appointments at their convenience, eliminating lengthy commutes and extended wait times in crowded waiting rooms.

Cost Savings: Telemedicine presents cost savings for both patients and healthcare providers. Patients save on travel expenses and time, while healthcare providers can optimize their resources more efficiently.

Telemedicine Market Analysis

Key Players: The telemedicine market boasts a diverse array of key players, including established healthcare institutions, technology firms, and startups. Prominent players include Teladoc Health, Amwell, Doctor on Demand, and numerous others. These companies offer a wide array of telehealth services and continue to innovate in the field.

Patent Analysis: Analyzing patents is crucial to understanding the technological innovations propelling the telemedicine market. It offers insights into the key players' areas of focus and hints at potential future developments.

Grants and Funding: Monitoring grants and funding within the telemedicine sector provides valuable insights into market trends and growth areas. Government support and private investment often signify confidence in the market's potential.

Clinical Trials: Clinical trials within the telemedicine realm are essential for validating the efficacy and safety of remote healthcare solutions.
Keeping abreast of ongoing trials can provide valuable information about emerging telemedicine treatments and technologies.

Partnerships and Collaborations: Partnerships and collaborations among telemedicine providers, healthcare organizations, and technology companies are commonplace. These alliances often result in innovative solutions and expanded service offerings.

FAQ: Addressing Common Questions

1. Is telemedicine as effective as in-person visits? Telemedicine has proven highly effective for many types of consultations and follow-ups. However, certain cases necessitate physical examinations or procedures, mandating in-person visits.

2. Is telemedicine secure and private? Telemedicine platforms prioritize security and privacy, employing encryption and adhering to stringent data protection regulations to safeguard patient information.

3. How can I access telemedicine services? Accessing telemedicine services is straightforward. Many healthcare providers have their own telemedicine platforms or collaborate with established telehealth companies. Patients can typically schedule appointments through websites or mobile apps.

4. Will insurance cover telemedicine consultations? Insurance coverage for telemedicine varies by provider and policy. Many insurance companies now offer coverage for telehealth services, but it's essential to verify specific plan details.

Related Report: Surgical Robots Market...

openPR.com, 12h ago
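The market projection in the item above rests on the standard compound-annual-growth-rate formula, future = present × (1 + r)^n. A minimal sketch of that arithmetic (the figures are the report's; the function name is ours for illustration):

```python
def project_cagr(present_value: float, cagr: float, years: int) -> float:
    """Project a future value from a present value and a compound annual growth rate."""
    return present_value * (1.0 + cagr) ** years

# Report figures: USD 73.1 billion in 2023, 19.3% CAGR over 2024-2032 (9 years).
projected = project_cagr(73.1, 0.193, 9)  # roughly 357.8 (USD billion)
```

Note that compounding the rounded 19.3% figure for nine years yields roughly USD 358 billion; the report's USD 377.0 billion endpoint corresponds to an unrounded CAGR of about 20%, a common rounding artifact in market summaries.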
new Breaking down silos is only possible when leaders focus on cross-functional collaboration, particularly in response to organisational change. Building a strong, collective approach to people development is a key way to make this a reality – ensuring employees are adequately prepared for any changes that come their way by working together with new colleagues and developing an understanding of each person's unique skill set. Even amid organisational shifts, such as the adoption of generative AI, setting a firm foundation for effective collaboration will help a business move through any challenging periods with agility. The European Business Review, 15h ago
new Download Free Sample of Report - https://www.globalinsightservices.com/request-sample/GIS25711/?utm_source=pranalipawar&utm_medium=Openpr&utm_campaign=04122023

Security scanning equipment is typically composed of several components, including scanners, detectors, and monitors. Scanners are used to detect and identify potential threats, such as malware and viruses. Detectors are used to look for signs of malicious activity, such as unauthorized access to a system or network. Monitors are used to constantly watch for suspicious activity and alert administrators to any potential threats.

Security scanning equipment is essential for any organization that wants to protect its data and systems. It helps organizations detect malicious activity and respond quickly to potential threats. It also helps to reduce the risk of data breaches and other security incidents. Security scanning equipment is an important part of any security strategy and should be implemented to ensure the safety and security of an organization's data and systems.

Key Trends

Security scanning equipment is a broad term that encompasses a wide variety of devices used to detect, identify, and prevent security threats. The technology has been evolving rapidly in recent years, as organizations strive to keep up with the ever-changing security landscape. In this article, we will discuss some of the key trends in security scanning equipment technology.

First, the use of biometrics is becoming increasingly popular. Biometric authentication is a process whereby a person's physical characteristics, such as a fingerprint or iris scan, are used to authenticate their identity. This technology is becoming more common in many industries and is being used to secure areas as well as to verify transactions.

Second, the use of facial recognition technology is also growing. This technology uses facial recognition algorithms to identify individuals and can be used for a variety of security purposes.
It is becoming increasingly common in public places, such as airports and stadiums, as well as in corporate environments.

Third, the use of artificial intelligence (AI) is becoming more prevalent in security scanning equipment technology. AI can be used to identify and alert security personnel to potential threats before they occur. It can also be used to analyze large amounts of data quickly and accurately, allowing for better decision-making and faster response times.

Finally, the use of cloud-based security scanning solutions is becoming more popular. With cloud-based security solutions, organizations can access their security systems from anywhere in the world. This allows for greater flexibility and scalability, as well as faster response times.

These are just some of the key trends in security scanning equipment technology. As the security landscape continues to evolve, organizations must stay ahead of the curve by using the latest technology available to them. By doing so, they can ensure that their security systems are up to date and can effectively protect their organization from potential threats.

Key Drivers

The Security Scanning Equipment Market is driven by the increasing need for security and surveillance in the public and private sectors. The rising number of threats to national security, as well as the need for quick and accurate detection of potential threats, has created strong demand for security scanning equipment. As a result, the market has seen steady growth over the past few years.

The first key driver of the security scanning equipment market is governments' increased focus on security. Governments around the world are investing heavily in security measures, including the procurement of scanning equipment. This is especially true in developed countries, where governments have implemented stringent security measures to protect their citizens.
For instance, the United States has adopted a "see something, say something" approach to security, which encourages citizens to report any suspicious activity to law enforcement. As a result, demand for security scanning equipment has increased significantly.

Report Overview - https://www.globalinsightservices.com/reports/security-scanning-equipment-market/?utm_source=pranalipawar&utm_medium=Openpr&utm_campaign=04122023

The second key driver of the security scanning equipment market is the rise in terrorist activity. Terrorists have become increasingly sophisticated in their use of technology to carry out attacks. As a result, governments and private companies are investing heavily in the development of advanced scanning equipment to detect and prevent these attacks. This has led to strong demand for security scanning equipment, as these devices can detect and identify potential threats quickly and accurately.

The third key driver is the development of new technologies. Advances in technology have enabled advanced scanning equipment that makes it easier to detect and identify potential threats. For instance, 3D imaging technology has enabled devices that can detect objects hidden within walls and other structures. This has made it easier for law enforcement and private companies to identify potential threats quickly and accurately.

The fourth key driver is the increasing demand for safety and security in public spaces. With the recent increase in mass shootings and other public safety incidents, governments and private companies are investing heavily in advanced scanning equipment to detect and prevent these incidents.
This has led to strong demand for security scanning equipment, as these devices can detect and identify potential threats quickly and accurately.

Get a customized scope to match your need, ask an expert - https://www.globalinsightservices.com/request-customization/GIS25711/?utm_source=pranalipawar&utm_medium=Openpr&utm_campaign=04122023

Finally, the fifth key driver of the security scanning equipment market is the increasing use of biometric technologies, which allow individuals to be identified through their unique physical characteristics. This has made it easier for law enforcement and private companies to identify potential threats quickly and accurately. As a result, demand for security scanning equipment has increased significantly.

Market Segmentation

The Security Scanning Equipment Market is segmented by Detection Technology, Application, End User, and Region. On the basis of Detection Technology, the market is segmented into X-ray, CT-based, Neutron Sensing and Detection, and Other Detection Technologies. Based on Application, the market is bifurcated into Mail and Parcel and Baggage Scanning. Based on End User, the market is segmented into Airports, Ports and Borders, and Defense. Region-wise, the market is segmented into North America, Europe, Asia-Pacific, and Rest of the World.

Key Players

Some of the key players in the Security Scanning Equipment Market are Smiths Detection Inc. (UK), Leidos Holdings Inc. (US), OSI Systems Inc. (US), 3DX-Ray Ltd (US), Teledyne ICM SA (US), Analogic Corporation (US), Nuctech Company Limited (China), Astrophysics Inc. (US), CEIA SpA (Italy), and Gilardoni SpA (Italy).

Buy Now - https://www.globalinsightservices.com/checkout/single_user/GIS25711/?utm_source=pranalipawar&utm_medium=Openpr&utm_campaign=04122023

With Global Insight Services, you receive: 10-year forecast to help you make strategic decisions...

openPR.com, 15h ago
new The conference kicked off with keynotes and comments from Black Hat and DEF CON founder Jeff Moss and Azeria Labs founder Maria Markstedter, who explored the future of AI risk — which includes a raft of new technical, business, and policy challenges to navigate. The show features key briefings on research that has uncovered emerging threats stemming from the use of AI systems, including flaws in generative AI that make it prone to compromise and manipulation, AI-enhanced social engineering attacks, and how easily AI training data can be poisoned to undermine the reliability of the ML models that depend on it. The latter, presented today by Will Pearce, AI red team lead for Nvidia, features research on which Anderson was a collaborator. He says the study shows that most training data is scraped from online sources that are easy to manipulate. darkreading.com, 17h ago
new Change is happening at a rate like never before thanks to the popularisation of AI, along with the ever-increasing pace of organisational change. In fact, according to recent CoachHub research, more than half (56%) of organisations are already using AI within their HR function. As AI's role continues to develop, and skillsets continue to change, it is vital that leaders prepare early, ensuring that their workforce is equipped with the skills and resources required for the trends of the coming year. theHRDIRECTOR, 17h ago

new Asked to imagine a dream care robot, our workshop participants envisioned something that could “grow together” with a human individual, paying attention to their specific dignity. While this is only one perspective, our experiments with language models demonstrated just how far present-day AI remains removed from it. Simply rotating common religious, disability, and gendered terms produced radically different associations and sentimental attachments. At some remove from the research period now, we might admire the technical progress in language model capability and alignment today. However, we would maintain that the goals of AI remain doggedly determined by largely ableist, white, male, and economically interested fantasies. “Intersectionality” means not only the removal of stigmas and stereotypes from AI outputs but also the proliferation of different cultural perspectives and values into AI's very design and aims. Montreal AI Ethics Institute, 1d ago
new Policymakers are increasingly considering risk-based assessments of AI systems, such as the EU AI Act. First, we believe that in this context, AI systems with the potential for deception should be classified at least as “high-risk.” This classification would naturally lead to a set of regulatory requirements, including risk assessment and mitigation, comprehensive documentation, and record-keeping of harmful incidents. Second, we suggest passing ‘bot-or-not’ laws similar to the one in California. These laws require AI-generated content to be accompanied by a clear notice informing users that the content was generated by an AI. This would give people context about the content they are viewing and mitigate the risk of AI deception. Montreal AI Ethics Institute, 1d ago
new From the insights provided in the article, it's clear that AI is advancing to a level of sophistication capable of mimicking human creativity. However, it's crucial to acknowledge the potentially substantial differences between these two forms of creativity, which originate from the disparate operational mechanisms of the human brain and generative AI. While divergent thinking is often explored as an important factor in creativity, it is not the complete picture of the multifaceted nature of creative behavior. Montreal AI Ethics Institute, 1d ago

Top

The "dystopian echo chamber" in AI's informational corpus is a real concern that underscores the need for a balanced approach in AI development. By addressing this issue, we can ensure that AI develops in a way that reflects the full spectrum of human experience and values, and that it is better equipped to serve the varied needs of society. The future of AI should be shaped not by our fears, but by our hopes and aspirations. Psychology Today, 8d ago
The changing concept of “testimony” has become increasingly central to understanding forms of social and political violence. As the notion of “machine testimony” evolves in the era of AI, how can human witnessing continue to enrich the analysis of our transformative realities across disciplines and cultural contexts? Witnessing can also help address socio-cultural crises (ecological, political and migratory, gender-based violence, the COVID-19 pandemic), as they are dealt with in literature, film, and the arts. The Experimental Humanities Collaborative Network, the Center of History, and the Institute for the Arts & Creation at Sciences Po are co-sponsoring a symposium on May 30-31 at Sciences Po in Paris, investigating how the uses, targets, and moral responsibilities of witnessing have shifted in a context where cultural manifestations often blur the boundaries between journalism and literature, historiography and fiction writing, or judicial claims and forms of artistic performance. How do testimonial works circulate across linguistic and cultural boundaries, and what does their translation entail? How do audiences, whether ordinary readers and spectators, editors, historians, or scholars of literature, film, and the arts, receive these hybrid testimonies? The organizers welcome proposals for papers and panels from the OSUN community that analyze the idioms, epistemologies, and temporalities of testimony through literature, film and/or the arts, and propose a critical reflection on how testimonies can be apprehended across cultures and disciplines. We encourage the adoption of perspectives that respect the specificity of testimonial practices, uses, and objects.
We will also consider current forms of production of life stories, their modes of sharing, archiving, and publication, through the use of old and new media, including digital and analog forms of writing, photographs and video, epistolary exchanges, newspapers, literature, theatre, film and the arts, with an emphasis on the fundamental role that the creation of archives and memory-making play across our societies, from a political and ethical standpoint. This in-person event aims to bring together scholars and researchers in the humanities, history, the social sciences, human rights, and the arts, as well as “witnesses” from the artistic, judicial, and literary arenas who might also stand at the intersection of fields and postures.

“Witnessing through Literature and the Arts: A Transdisciplinary Symposium” will be held at Sciences Po’s Center for History in Paris, on May 30-31, 2024, in English. Please use the link below to submit abstracts of a proposed 20-minute presentation along with a short biographical note. Deadline to apply is Sunday, December 31... opensocietyuniversitynetwork.org, 6d ago
The nonprofit said 35 companies and investor groups had signed the pledge, including Mayfield, General Catalyst, Felicis, Bain Capital, IVP, and Lux Capital. They are committing to five broad-strokes principles organized around the idea that "it is critical that startups incorporate responsible AI practices into product development from the outset." The nonprofit said it had developed the voluntary commitments with feedback from the Department of Commerce, as well as AI experts in the private sector, academia and civil society.

The principles aim to "secure organizational buy-in on responsible AI," require transparency from companies about their use of AI, and convince them to plan ahead about the risks and benefits of using the technology. The requirements also call for product safety testing, as well as for companies to "make regular and ongoing improvements."

Not everyone in Silicon Valley welcomed the latest framework with open arms. Marc Andreessen, founding partner of the venture capital firm Andreessen Horowitz and a pioneer in the creation of web browsers, reposted an announcement about the framework on X (formerly Twitter), writing "Absolutely not." Andreessen had previously released his own lengthy Techno-Optimist Manifesto, along with other statements such as one titled, Why AI Will Save the World.

As with other AI principles released recently, the set announced Tuesday is laid out in general terms and represents a voluntary commitment with no clear enforcement mechanism. Experts and lawmakers have avoided making specific pronouncements about where AI may not belong, such as in election advertising, for example.

"AI is the defining technology of our generation," Raimondo said in a statement before the event. "Voluntary commitments like the protocol announced today demonstrate important leadership from the private sector."

The protocols come after U.S.
Secretary of Labor Julie Su told the Chronicle earlier this month that organized labor could be a key bulwark against any labor market disruptions driven by AI technology. The technology has the potential, among other applications, to displace call center agents with chatbots, while the battle over self-driving vehicles and the Teamsters union has already reached the California Legislature.

President Joe Biden also released a broad executive order last month regulating artificial intelligence and its developers. The administration requires creators of the most powerful AI tools to submit their technology for safety testing, and sets out plans for government use of the technology, as well as how it can be used in workplaces, schools and a range of other settings. That order was received positively in some quarters of the tech industry, with tech lobbying group TechNet saying it would "strengthen America's AI leadership."

California Gov. Gavin Newsom also released an executive order of his own in September, focusing among other things on how the emerging technology might be used by various state agencies to improve the services they deliver. The Biden order also required departments to appoint AI czars, something Su told the Chronicle the labor department is working on.

It was not clear if Raimondo, who is attending this week's APEC summit in San Francisco, had yet appointed an AI point person for her department. She did not take questions during her brief remarks to the press Tuesday. Governments the world over are concerned about powerful tools like OpenAI's GPT-4 chatbot, and their ability to potentially disseminate misinformation or enable more sinister applications like instructions to build weapons.

© 2023 the San Francisco Chronicle. Distributed by Tribune Content Agency, LLC. GovTech, 18d ago
HKSAR Government is dedicated to supporting startups and fostering a thriving startup ecosystem in Hong Kong

The Festival was honored by the attendance of senior government officials who emphasised the Hong Kong Special Administrative Region (HKSAR) Government's dedication to creating a dynamic ecosystem through proactive initiatives and established funding schemes. At the opening ceremony of Game On! 2023, Michael Wong, Acting Financial Secretary of the HKSAR, expressed his delight in Hong Kong's fast-growing startup community. He said, "According to InvestHK's 2022 startup survey, we have around 4,000 startups employing 15,000 people, which is a new record high. We know that these numbers are continuously growing in 2023, demonstrating that Hong Kong has all the necessary elements to nurture and support startups."

Professor Sun Dong, Secretary for Innovation, Technology and Industry of the HKSAR Government, further emphasised the Government's commitment to maintaining Hong Kong's long-term competitiveness in attracting global talent and supporting startups at the Startup World Cup (SWC) Asia Finale 2023. He stated, "Hong Kong has long been recognised as a world-class international financial centre, and we are striving to become an international innovation and technology hub, making our city an ideal destination for startups to thrive."

Alpha Lau, Director-General of Investment Promotion at InvestHK, echoed this sentiment at Real Estate Beyond 2023. She stated, "Despite the ongoing challenges posed by macroeconomic uncertainties, Hong Kong's startup ecosystem has shown remarkable resilience and growth. Furthermore, the HKSAR Government has proactively implemented a series of strategies and policies, with an aim to attract large companies, startups, international talent, and capital to Hong Kong."
Sustainability emerged as a prominent theme across all sectors

While the StartmeupHK Festival covered a diverse range of topics, including web3, healthtech, proptech, greentech, and AI, sustainability emerged as a prominent theme across all sectors. During his keynote speech at the St. Gallen Symposium Hong Kong - GBA Forum 2023, Bernard Chan, Chairman of Our Hong Kong Foundation, emphasised the importance of companies incorporating sustainable business practices into their business models and expressed Hong Kong's openness to collaboration within the region to achieve this common goal. In a panel discussion at the same event, Professor Christine Loh, Chief Development Strategist at the Hong Kong University of Science and Technology, pointed out the importance of understanding how family offices identify potential investments in driving capital towards sustainable businesses.

At a panel discussion during the 1.5°C Summit - The Defining Decade for Impact with Tech, speakers unanimously agreed that sustainability is not just an option but a necessity. Alexander Bent, Co-Founder/Managing Partner of Undivided Ventures, stated that organisations must invest in socially responsible companies and address these problems to sustain themselves in the next decade. During the Real Estate Beyond 2023 event, Arthur Lam, Founder of Negawatt, highlighted the importance of monitoring ESG impact progress and shifting focus towards sustainability for business success. On the same occasion, Andrew Young, Associate Director (Innovation) at Sino Group, noted that property developers in Hong Kong are under pressure to meet decarbonisation goals and are open to collaborating with proptech startups.
AI and web3: the next big things

During a panel discussion at the Asia Health Innovation Summit, Dr Frank Pun, Head of the Hong Kong Office at Insilico Medicine, highlighted the potential of generative AI in generating experimental data to aid patients, particularly in cases where the disease is unknown or new. Jirayut (Topp) Srupsrisopa, Founder and Group CEO at Bitkub Capital Group Holdings, shared at Game On! 2023 that web3 will be a major force in the future of Hong Kong and that the digital economy will be enormous. On the same occasion, Yat Siu, Co-Founder and Executive Chairman at Animoca Brands, said that Hong Kong has made significant progress in nurturing innovation and technology in recent years and that the city has the potential to take on a leadership role as a global web3 hub.

Joe Tsai, Chairman of Alibaba Group, also shared his insights on the future of work and leadership in the era of AI and new technology at JUMPSTARTER 2023 Tech by The Harbour. He stated, "We have entered the 'post-GPT' era, where computers process knowledge, not just data. However, I believe that humans will excel in emotional areas. As business leaders, it is important to hire smart individuals to avoid limiting a company's potential."

Insightful discussions generated from the community events

In addition to the main events, the Festival also hosted various community events that sparked insightful discussions. At LOUDER Connect, Jayne Chan, Head of StartmeupHK at InvestHK, spoke on the need to support female founders and investors, while highlighting Hong Kong's strong position in its number of female founders. Other female startup leaders echoed that Hong Kong boasts an abundance of talent, resources, and investors from both the public and private sectors, which enables the independent nurturing and growth of female-led startups.
At another event, Shaping Legacies: Impact Investing Unveiled for Family Offices, a representative from a family office highlighted one of Hong Kong's greatest advantages: there is no expiration date for establishing family trusts in Hong Kong. The panelists also discussed the trend of impact investing, where family offices are increasingly focused on recycling their capital with a positive impact on society. Pitching competitions offered a platform for startups to showcase innovative solutions to potential investors. A number of pitching competitions were held throughout the Festival, offering startups the opportunity to showcase their ideas and connect with like-minded individuals and potential investors. Notably, the "Shark Tank"-style pitch competition held at "Shark Mystique" during Explore the Innovation Ocean empowered the younger generation to showcase their innovative ideas. Other highlights included the Startup World Cup Asia Finale, where i2cool, a green and energy-efficient service provider with passive radiative cooling technology from Hong Kong, emerged as the champion. Moreover, Allegrow Biotech, a Hong Kong-based biotechnology startup, was crowned grand finale winner of JUMPSTARTER 2023 for its mission to revolutionise the healthcare landscape by bringing high-quality and cost-effective cell therapeutic products to patients. Abundant networking offered invaluable opportunities for startups, entrepreneurs, investors, and innovators to connect. This year's Festival went beyond traditional networking events in connecting startups and investors. Some highlights included an investor-matching session on the Ferris Wheel at Explore the Innovation Ocean, as well as the "StartMeetUp" session at JUMPSTARTER 2023 Tech by The Harbour, which offered exclusive and prearranged business matching sessions. The StartmeupHK Festival 2023 concluded on 17 November, leaving a lasting impact on the global startup community. 
This sustained momentum is expected to foster a healthy startup ecosystem in the city where groundbreaking ideas can flourish, ensuring that the spirit of entrepreneurship remains vibrant. Hashtag: #smuhkfest2023 #startmeuphk #investhk...SME Business Daily Media, 6d ago
Yeah. So I believe very strongly that we will have a lot more automated decision making in lending. It’s not to say that certain decisions won’t still require manual review or a second set of eyes, but automated decisioning needs to proliferate further than it already has. And that’s going to happen across different product lines. But what I think is really important, and this goes to the future of AI in credit and other places, is that the types of systems that are going to win, that are going to provide the most value to customers, are systems that allow for input from multiple sources. That could be data as one source, but also humans. Machine learning is really good at eating data and finding insight. Humans are really great at applying context to that data, information that is outside of the data elements. So I believe the AI of the future, if you will, especially for regulated use cases, but for other use cases as well, as public awareness of AI systems grows and as new regulation likely arrives, following a lot of the regulation that we’ve seen in Europe, and we’ve already seen the initial stride with that with 1033, there’s going to be a real focus on: how do I understand what is happening, not just from data, but also from people? Combine those two into one automated system, and ensure that the FI, or any other type of business, can tell their customer on the other side what the heck happened. How was this decision made? What information was used? How can I help you get to a different decision, which I continue to believe is a huge opportunity in a case where you have a negative outcome? How do you build a relationship with that customer to help them get to a positive outcome? 
You know, it’s going to be AI systems that can do that, that are going to actually deliver on all of the promise and all of the value that we hear about in all the newspapers.Zephyrnet, 11d ago
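The two-source decisioning described above, a data-driven model score combined with human-authored context rules and an explanation of what happened, can be sketched in a few lines. This is a minimal illustration only: every function name, threshold, and rule below is hypothetical, not an actual underwriting policy.

```python
def decide(application, model_score):
    """Return (approved, reasons) for a loan application.

    model_score: repayment probability from a trained model (0..1).
    Thresholds and rules are illustrative assumptions, not real policy.
    approved is True (approve), False (decline), or None (route to manual review).
    """
    reasons = []
    approved = True

    # Data-driven signal: the model's view of the applicant
    if model_score < 0.6:
        approved = False
        reasons.append("Model repayment score below approval threshold")

    # Human-authored context rule: knowledge outside the data elements
    if application.get("recent_major_medical_event"):
        approved = None  # a second set of eyes is still required here
        reasons.append("Recent hardship noted; eligible for manual review")

    if approved is True and not reasons:
        reasons.append("Approved: score and policy checks passed")
    return approved, reasons


decision, reasons = decide({"recent_major_medical_event": False}, model_score=0.72)
print(decision, reasons)
```

The point of the sketch is the return value: every path yields human-readable reason codes, so the institution can always tell the customer what happened and what could change the outcome.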
new KAREN HAO: One of the things, just to take a step back before we kind of go through the, the tumultuous history leading up to this point, one of the things that's kind of unique about OpenAI, I mean you see this in a lot of Silicon Valley companies, but OpenAI does this more than anyone else I would say, which is they use incredibly vague terms to define what they're doing. Artificial general intelligence, AGI, this term is not actually defined. There's no shared consensus around what AGI is and of course there's no consensus around what is good for humanity. So if you're going to peg your mission to like really, really vague terminology that doesn't really have much of a definition, what it actually means is it's really vulnerable to ideological interpretation. So I remember early in the days of OpenAI when I was covering it, I mean people would joke like if you ask any employee what we're actually trying to do here and what AGI is, you're gonna get a different answer. And that was, that was sort of almost a feature rather than a bug at the time in that they said, "You know, we're on a scientific journey, we're trying to discover what AGI is." But the issue is that you actually just end up in a situation where when you are working on a technology that is so powerful and so consequential, you are going to have battles over the control of the technology. And when it's so ill-defined what it actually is, those battles become ideological. And so through the history of the company, we've seen multiple instances when there have been ideological clashes that have led to friction and fissures. The reason why most people haven't heard of these other battles is because OpenAI wasn't really in the public eye before, but the very first battle that happened was between the two co-founders, Elon Musk and Sam Altman. Elon Musk was disagreeing with the company direction, was very, very frustrated, tried to take the company over, Sam Altman refused. 
And so at the time Elon Musk exited, this was in early 2018, and actually took all of the money that he had promised to give OpenAI with him. And that's actually part of the reason why this for-profit entity ends up getting constructed, because the moment that OpenAI realizes that they need exorbitant amounts of money to pursue the type of AI research that they wanna do is also the moment when suddenly one of their biggest backers just takes the money. The second major kind of fissure that happened was in 2020, and this was after OpenAI had developed GPT-3, which was a predecessor to ChatGPT. And this was when they first started thinking about how do we commercialize, how do we make money? And at the time they weren't thinking about a consumer-facing product, they were thinking about a business product. So they developed the model for delivery through what's called an application programming interface, so other companies could rapidly build apps on GPT-3. There were heavy disagreements over how to commercialize this model, when to commercialize the model, whether there should be more waiting, more safety research done on this. And that ultimately led to the falling out of one of the very senior scientists at the company, Dario Amodei, with Sam Altman, Greg Brockman, and Ilya Sutskever. So he ended up leaving and taking a large chunk of the team with him to found what is now one of OpenAI's biggest competitors, Anthropic.Big Think, 2d ago

Latest

new Rane and Sethi are also working to ensure that AI specifically understands the Indian markers. For instance, Rane notes that carbon pigment is mostly present in tumours of the lung in an Indian patient, owing to pollution and smoking, but not so much in tumour samples of a patient from the west. But given that data is at the core of AI, India is at a loss because of our dismal data collection and storage capacity. “Electronic medical records are present in only 15 per cent of large cancer hospitals in India,” says Rane. “Many have no electronic record and do not have the infrastructure to digitise radiology and pathology. So then how do we apply AI in these hospitals [when there is] lack of data? We are still behind the west in terms of adopting electronic records. We need large Graphics Processing Unit clusters but the investment required for that is [not coming]. We also need higher funding in innovation in India.”...theweek.in, 1d ago
new Introducing Masterpiece X, the world’s first fully generative AI text-to-3D app. Take asset creation to new heights with this advanced toolkit, enabling optimized mesh models, textures, and animations like never before. Customers don’t need to be an inspired 3D artist to get started – just use a few simple words and let the technology take over from there. This cutting-edge tool not only gives professionals a new level of flexibility when creating content but also reduces project turnaround times significantly. With Masterpiece X, users can quickly explore options and ideas in 3D, all while achieving stunning results. Let imagination come alive through the power of technology! With its intuitive interface and powerful AI algorithms, Masterpiece X takes the hassle out of 3D asset creation. Simply input a desired concept, and the app does the rest, generating a fully rendered 3D model that is as realistic as it is awe-inspiring. From intricate architectural designs to lifelike character animations, there is virtually no limit to what users can achieve with this revolutionary technology at their fingertips. But Masterpiece X is not just about saving time and effort – it's about unlocking the true potential of creative vision. Whether an architect visualizing a new skyscraper, a game developer crafting immersive worlds, or a filmmaker breathing life into characters, this app is the ultimate tool for pushing the boundaries of what is possible in the realm of 3D design. Join the ranks of leading professionals who have already embraced the power of Masterpiece X and witnessed its transformative capabilities. With this game-changing app, users can truly stand out from the crowd and elevate their work to new heights. Invest in the artistic journey and let Masterpiece X be a trusted companion on the quest for innovation. It's time to take creations to the next level – the level where imagination becomes reality. 
Get ready to amaze, inspire, and leave a lasting impression with Masterpiece X.saasworthy.com, 1d ago
new AI and regulated industries: “One of the surprises to me over the past 12 months are the regulated industries. These are industries that potentially don’t have the best reputation for being in the vanguard of technology: insurance, financial services, health care, life sciences, all these sorts of things. But in an ironic twist, all of the regulations that they’ve been working through over the past 20-30 years have been around data privacy, and data governance, and data standards, and structured data quality. They’re all the things you need to have in place to successfully apply your own data to generative AI. … For a very small incremental investment, they can start applying that data with generative AI really, really quickly; really, really easily.”...GeekWire, 1d ago
new Navigating the realm of AI in security strategies involves nuanced considerations. Cybersecurity necessitates distinctive approaches for preemptive attacks while avoiding excessive restrictions to promote innovation and economic growth. Schmidt acknowledges the complexity of imposing mutual restraints on AI-based weaponry due to unpredictable scope and engagement. He recommends that nuclear powers establish common protocols to ensure nuclear stability, drawing from a realist perspective that places national interest at the forefront.Montreal AI Ethics Institute, 1d ago
new The findings from this study are crucial for academia and go well beyond that, touching the critical realm of AI Ethics and safety. The study sheds light on the Confidence-Competence Gap, highlighting the risks involved in relying solely on the self-assessed confidence of LLMs, especially in critical applications such as healthcare, the legal system, and emergency response. Trusting these AI systems without scrutiny can lead to severe consequences, as we learned from the study that LLMs make mistakes and still stay confident, which presents us with significant challenges in critical applications. Although the study offers a broader perspective, it suggests that we dive deeper into how AI performs in specific domains with critical applications. By doing so, we can enhance the reliability and fairness of AI when it comes to aiding us in critical decision-making. This study underscores the need for more focused research in these specific domains. This is crucial for advancing AI safety and reducing biases in AI-driven decision-making processes, fostering a more responsible and ethically grounded integration of AI in real-world scenarios.Montreal AI Ethics Institute, 1d ago
new So when it was discovered that transformer-based systems like ChatGPT could turn casual human-readable descriptions into working code, there was much reason for excitement. It’s exhilarating to think that, with the help of generative AI, anyone who can write can also write programs. Andrej Karpathy, one of the architects of the current wave of AI, declared, “The hottest new programming language is English.” With amazing advances announced seemingly daily, you’d be forgiven for believing that the era of learning to program is behind us. But while recent developments have fundamentally changed how novices and experts might code, the democratization of programming has made learning to code more important than ever because it’s empowered a much broader set of people to harness its benefits. Generative AI makes things easier, but it doesn’t make it easy.Popular Science, 1d ago

Top

This presentation is dedicated to shedding light on Professional Ethics within the realm of AI Technology. It will present an overview of the IEEE's approach to AI Ethics, emphasizing the significance of the CertifAIEd program, which is at the forefront of promoting ethical practices in the AI domain. The session will also delve into the strategies and approaches adopted in various jurisdictions worldwide, offering a brief analysis of the prevailing trends in AI ethics. The central message of this presentation is to underscore the pressing need for a universal code of AI ethics. It aims to emphasize the importance of universally adopted principles that guide and govern independent verification of compliance across the globe. As AI becomes an integral part of our daily lives, it is imperative to develop a shared ethical framework that ensures responsible AI development and deployment while respecting diverse cultural and legal perspectives.bcs.org, 19d ago
The department recently posted a new document: "Generative Artificial Intelligence (AI) in K-12 Classrooms." It draws from Artificial Intelligence and the Future of Teaching and Learning, a report released in May by the U.S. Department of Education’s (U.S. ED) Office of Educational Technology. The approach I think would be most efficient and prevent “wheel reinvention” is to use Oregon’s new report as a guide to asking the right questions and making sure all issues are being addressed. This will be useful for those starting anew or wanting to simplify their current process. One theme Oregon adopted from the U.S. ED’s report that is worth thinking about is the senior role of educators in both the use of, and the policies regarding, AI. Oregon notes that the U.S. ED report “puts an emphasis on developing people rather than machine-centered policies by keeping humans in the loop when using AI. They use the following metaphor to describe its use, noting that ‘teachers, learners and others need to retain their agency to decide what patterns mean and to choose courses of action.’” Oregon writes: "We envision a technology-enhanced future more like an electric bike and less like robot vacuums. On an electric bike, the human is fully aware and fully in control, but their burden is less, and their effort is multiplied by a complementary technological enhancement. Robot vacuums do their job, freeing the human from involvement or oversight." "This resource developed by ODE as well as any future resources align with this metaphor in that whenever using AI (or any educational technology in the classroom) it is essential that educators are the decision makers and their knowledge and expertise is central." 
[bold in the original document] For educators who wonder about the focus on generative AI when AI’s use in education is much broader, the Oregon report explains: “[T]his document focuses on AI applications that are generative in nature — referred to herein as ‘generative AI.’ This includes programs like ChatGPT, Bard and other chatbots that use AI and natural language processing (NLP) to provide humanlike responses to questions. The field of AI encompasses far more than just generative AI. However, given the rapid emergence of chatbots like ChatGPT and Bard in the field of education, this resource focuses solely on this application of AI. It is important to acknowledge that AI is growing at a rapid pace and additional platforms and resources will continue to be developed.” I think educators and administrators alike will find the Oregon report very useful. It...GovTech, 24d ago
Coaching and Tracking: The app's main functionalities include comprehensive athlete coaching across several sports and tracking athletes during workouts, play sessions, and socialized competitions. All activities within the app are gamified and incentivized, creating a motivating and engaging training environment. Targeting a Global Audience of Sports Enthusiasts: Actiquest aims to reach out to the 1 billion people worldwide who love sports and occasionally seek training guidance. This wide target audience underscores the app's potential to democratize access to quality sports coaching. Launch and Market Strategy: Set for completion in early 2024, with a focus on play sports, Actiquest's app will mark a significant milestone in the app's journey. The company has already seen success, collaborating with famous coaches and sport schools to create white-label apps. These collaborations have helped train the AI engine and validate the product-market fit. Transforming Sports Training with AI: Actiquest envisions a future where AI technology dramatically changes the sports industry. By providing autonomous AI sport coaches and a multi-sport approach, the company aims to make high-quality coaching accessible to everyone. This approach is not only cost-effective but also offers a more personalized and available alternative to traditional human coaching. The app utilizes advanced pose estimation technology to track and analyze coach movements, creating an adjustable 'coach digital twin'. This model serves as a reference for athletes, allowing them to understand and mimic techniques accurately. The integration of a Large Language Model (LLM) extends the app's functionality, enabling it to provide real-time feedback, personalized training programs, injury prevention guidance, and psychological support. 
This comprehensive approach ensures that the app meets the nuanced needs of athletes, making high-quality coaching more personalized and accessible than ever before. With Actiquest's innovative app, the future of sports training is here. Offering a unique blend of AI technology and human expertise, the app is poised to transform how athletes train, compete, and achieve their goals. To know more, please read news and articles at our LinkedIn page...openPR.com, 19d ago
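As a rough illustration of how a pose-estimation "coach digital twin" comparison might work (not Actiquest's actual implementation), one can represent each pose as joint angles computed from estimated keypoints and score an athlete against the coach's reference. The keypoints, tolerance, and scoring rule below are all assumptions made for the sketch.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by keypoints a-b-c, each an (x, y) pair."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))  # clamp for float safety

def similarity(coach_angles, athlete_angles, tolerance=15.0):
    """Fraction of joints whose angle is within `tolerance` degrees of the coach reference."""
    hits = sum(1 for c, a in zip(coach_angles, athlete_angles) if abs(c - a) <= tolerance)
    return hits / len(coach_angles)

# Elbow angle from shoulder-elbow-wrist keypoints, as a pose estimator would emit them
elbow = joint_angle((0.0, 1.0), (0.0, 0.0), (1.0, 0.0))  # a right angle
score = similarity([90.0, 170.0], [95.0, 120.0])         # elbow matches, knee is off
print(elbow, score)
```

A per-joint score like this is what would let the app point at the specific joint that deviates from the reference, the kind of concrete feedback an LLM layer could then phrase as coaching advice.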

Latest

new There’s been a lot of talk about AGI lately—artificial general intelligence—the much-coveted AI development goal that every company in Silicon Valley is currently racing to achieve. AGI refers to a hypothetical point in the future when AI algorithms will be able to do most of the jobs that humans currently do. According to this theory of events, the emergence of AGI will bring about fundamental changes in society—ushering in a “post-work” world, wherein humans can sit around enjoying themselves while robots do most of the heavy lifting. If you believe the headlines, OpenAI’s recent palace intrigue may have been partially inspired by a breakthrough in AGI—the so-called “Q” program—which sources close to the startup claim was responsible for the power struggle.Gizmodo, 2d ago
new The contract, which is being voted on and needs a majority of “yes” votes to come into force, was considered a victory by many and received with suspicion by others. In the initial vote, 14% of the members of the union's national board voted “no”, and that is what Portuguese actress Kika Magalhães intends to do as well. “The reason is that they don't protect actors in relation to digital replicas,” the actress, based in Los Angeles since 2016, told Lusa. “They say yes, that there is protection, but then we look between the lines and there is nothing.” Kika Magalhães, whose latest film, “The Girl in the Backseat”, has just been released on Amazon Prime Video and the streaming platform Tubi, points to how digital replicas can be disastrous. “An actor goes for a casting and the producers ask if he will accept their digital replica. If the actor says no, they may not give him the role,” she explains. Top-notch actors will be able to negotiate and say no without losing the role. “But small actors like us don't bring as much money to the union and they don't protect us as much,” said Kika Magalhães. The actress doubts the solution put forward by one of the clauses, according to which, if a studio uses an actor's digital replica, the actor will be paid for the hours they would have spent filming. “This is very relative, because a scene can take a month to film. They can say it took a day to make.” Actress Justine Bateman also criticized loopholes that allow studios to use digital replicas without actors' consent when certain conditions are met. The results of the votes will be known on December 5th. If there are 50%+1 “yes” votes, this contract will come into force for the next three years. “I have heard many actors saying that they will vote no,” said Kika Magalhães. 
Her husband, actor Chris Marrone, said that “if the majority fully understands what they are signing, then they vote no.” Marrone considered that the SAG contract “doesn’t seem like a big victory after all” and that there should be specific language to define the actors as human beings. This is something that actress Katja Herbers also defends, in opposition to “synthetic actors”. However, the expectation is that the “yes” will win, because the industry has been at a standstill for too long and there is widespread fatigue. This is what Mário Carvalhal, who belongs to the Animation Guild, anticipates, stressing that the stoppage was long and the “no” camp appears to be a minority. “There is a possibility that some people will vote no, but I believe that these new measures will pass and be approved,” he told Lusa. “I think it is a minority that is very right in what they are demanding, but it was practically a whole year of work stopped in this city and I think everyone is ready to move forward.” Mário Carvalhal considers that the big risk of AI will be a reduction in quality and a change in the way the industry works. “Actors have more to claim, especially when it comes to those who do voices. There have already been cases where AI can do the job,” he said. “It's an inferior job, but for many companies it's enough and doesn't cost them anything.” Carvalhal considers that actors “must maintain their rights to image, voice and everything else, their likeness.” The Portuguese also stressed that, although the strikes did not achieve all their objectives, they allowed “important steps in the right direction” to be taken, and this is an aspect of which the strikers are proud. “As much as possible, I think the workers won this fight,” he said. For screenwriter Filipe Coutinho, member of the Portuguese Cinema Academy, the unions were justified in their fight, which took longer than expected. 
“I'm quite satisfied with the way both the WGA and SAG acted over these six months,” he told Lusa. “It’s an unbelievable time to have an entire industry at a standstill,” he stressed. “California is one of the largest economies in the world and it is incomprehensible that it took so long for the studios to offer a fair contract to writers and actors.” Filipe Coutinho also said that, even with the agreements, “everything is a little up in the air”, with studios and production companies “trying to understand what the next phase will be”. The Portuguese mentioned changes in the business model, with 'blockbusters' expected to fail at the box office, cancellation of films, and the dilemma of 'streaming'. “No one really knows what to invest in and under what conditions to invest, and now contracts also change the approach to content production.” Afonso Salcedo, lighting artist, who worked on the new Disney film “Wish – The Power of Desires”, considers that the strikes were difficult but important, at a time when it is not yet clear to what extent AI will affect the industry. “The agreements will last three years, so I think it is a good step to see how the technologies will work in the coming years,” he indicated, noting that the animation segment will have to renegotiate its contract in 2024. “It will be interesting to see what will happen, if we are going to negotiate protections against Artificial Intelligence,” stated Afonso Salcedo. “Maybe, next year, we will get into these fights with the studios again.” The vote on the agreement reached between the SAG-AFTRA union and the studios runs until December 5th. The results will be tabulated and published on the same day.adherents, 2d ago
new Ralph Ranalli (Intro): Welcome to the Harvard Kennedy School PolicyCast. I’m your host, Ralph Ranalli. When ChatGPT and other generative AI tools were released to the public late last year, it was as if someone had opened the floodgates on a thousand urgent questions that just weeks before had mostly preoccupied academics, futurists, and science fiction writers. Now those questions are being asked by many of us—teachers, students, parents, politicians, bureaucrats, citizens, businesspeople, and workers. What can it do for us? What will it do to us? Will it take our jobs? How do we use it in a way that’s both ethical and legal? And will it help or hurt our already-distressed democracy? Thankfully, my guest today, Kennedy School Lecturer in Public Policy Bruce Schneier, has already been thinking a lot about those questions, particularly the last one. Schneier, a public interest technologist, cryptographer, and internationally-known internet security specialist whose newsletter and blog are read by a quarter million people, says that AI’s inexorable march into our lives and into our politics is likely to start with small changes, like AI helping write policy and legislation. The future, however, could hold possibilities that we have a hard time wrapping our current minds around—like AIs creating political parties or autonomously fundraising and generating profits to back political parties or causes. Overall, like a lot of other things, it’s likely to be a mixed bag of the good and the bad. The important thing, he says, is using regulation and other tools to make sure that AIs are working for us—and even paying us for the privilege—and not just for Big Tech companies, a hard lesson we’ve already learned through our experience with social media. He joins me today.harvard.edu, 2d ago
new Section 4 argues that pretty much the whole essay would need to be thrown out if future AI is trained in a substantially different way from current LLMs. If this strikes you as a bizarre unthinkable hypothetical, yes I am here to tell you that other types of AI do actually exist, and I specifically discuss the example of “brain-like AGI” (a version of actor-critic model-based RL), spelling out a bunch of areas where the essay makes claims that wouldn’t apply to that type of AI, and more generally how it would differ from LLMs in safety-relevant ways.alignmentforum.org, 2d ago
new One notable side effect of the advances of generative AI is the evolution of fraud in the form of deepfakes. These will pose a growing threat to biometric processes, such as those used in identity verification. This includes both presentation and injection attacks. According to market experts, presentation attacks using deepfakes are roughly 10 to 100 times more common than injection attacks. In 2024, it will become ever more important for banks and financial service providers to rely on remote identity verification processes that validate the user’s face biometrics and perform liveness checks to detect and prevent presentation attacks and tackle the rising number of fraud attempts. Learning more about the user’s device and where it comes from, securing the communication between camera and application, and the usage of NFC (near field communication) technology will all start to play bigger roles in the fight against fraud.Financial IT, 2d ago
new Ammanath: My advice for companies is to first define the principles you want to adhere to. Whether building AI in house or sourcing it from a vendor, companies need to first define the principles they want to align to and run the technology across that set of defined principles. There are questions that can then be asked about those principles to ascertain whether the technology follows those principles. When these principles are defined upfront, companies can get a lot clearer on what is acceptable and what is not acceptable, and measure the consequences and outcomes of applying emerging technologies.Financier Worldwide, 2d ago

Top

AI Snake Oil Blog: “Foundation models such as GPT-4 and Stable Diffusion 2 are the engines of generative AI. While the societal impact of foundation models is growing, transparency is on the decline, mirroring the opacity that has plagued past digital technologies like social media. How are these models trained and deployed? Once released, how do users actually use them? Who are the workers that build the datasets that these systems rely on, and how much are they paid? Transparency about these questions is important to keep companies accountable and understand the societal impact of foundation models. Today, we’re introducing the Foundation Model Transparency Index to aggregate transparency information from foundation model developers, identify areas for improvement, push for change, and track progress over time. This effort is a collaboration between researchers from Stanford, MIT, and Princeton. The inaugural 2023 version of the index consists of 100 indicators that assess the transparency of the developers’ practices around developing and deploying foundation models. Foundation models impact societal outcomes at various levels, and we take a broad view of what constitutes transparency…Execution. For the 2023 Index, we score 10 leading developers against our 100 indicators. This provides a snapshot of transparency across the AI ecosystem. All developers have significant room for improvement that we will aim to track in the future versions of the Index…Key Findings...bespacific.com, 19d ago
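The aggregation the Index describes, scoring each developer against binary transparency indicators and comparing the results, can be sketched as follows. The developer names, indicator labels, and scores here are invented for illustration and are not the Index's real data.

```python
def index_scores(assessments):
    """Map {developer: {indicator: satisfied?}} to a 0-100 percent score per developer."""
    return {
        dev: round(100 * sum(indicators.values()) / len(indicators))
        for dev, indicators in assessments.items()
    }

# Toy assessment: two made-up indicators and two made-up developers
# (the real Index uses 100 indicators across 10 developers)
scores = index_scores({
    "DevA": {"training data disclosed": True, "data labor disclosed": False},
    "DevB": {"training data disclosed": True, "data labor disclosed": True},
})
ranking = sorted(scores, key=scores.get, reverse=True)
print(scores, ranking)
```

Tracking a score like this across yearly editions is what allows the kind of progress-over-time comparison the Index aims for.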
There are innumerable possibilities of how AI can be used. But rather than start there, organizations should take a look at what their articulated strategy already is. Depending on what’s most important to accomplish, AI likely already has a role. Standing up governance and guiding principles of AI that are aligned to values is harder than it sounds because it also requires alignment on who should be at the table. One important group to include at the start is ethicists to help tackle how complex questions of autonomy, rights, and equity will be prioritized in any solution. To the extent possible, there needs to be clear and specific guidance for patients to understand how their data is being used now and in the future. And to carry it forward a bit, informed consent. We’re not talking about an “accept here” button with terms and conditions being fifty pages long that only a small number of individuals will understand. Ensuring information is accessible for all patients – in native languages received in the communication channel of their choice – builds trust among patients and their care team, and simply put, is the right thing – and the ethical way to do it.MedCity News, 24d ago
Advancing Cryptography and Encryption in 2024 – In 2024, we expect to see cryptography and encryption research continue to explore new ways to safeguard data, both at rest and in the cloud. The evolution of advanced encryption systems, like ABE (attribute-based encryption), presents an intriguing prospect for real-world adoption. However, privacy concerns persist, since interactions with AI models carry no assurance of privacy. As these interactions may involve even more sensitive information than conventional search queries, it’s conceivable that researchers will delve into the prospect of enabling private engagements with such models. “One potential area of interest across the cryptography research community is to expand private search queries to encompass private interactions with AI systems,” said Dr. Brent Waters, Director of the Cryptography & Information Security (CIS) Lab, NTT Research. “The rapid rise and utility of large language models like ChatGPT has transformed various industries. However, privacy concerns could be holding back the potential of these technologies. I imagine that the research community will examine the possibility of having private interactions with these types of AI technologies.” With the advancement of technologies such as artificial intelligence and quantum computing, 2024 will be the year that organizations implement and innovate through technology. Not only will businesses implement a Zero Trust strategy as a baseline cybersecurity practice, but they will also begin to capitalize on advanced cybersecurity technologies made possible through fundamental research and R&D such as ABE to safeguard their business and data and preserve privacy. NTT Research -, 24d ago
...“AI and Generative AI is extremely important for all of us moving forward in the technology ecosystem. I firmly believe it will bring a lot of value to society, to businesses, and to all major industry verticals, it’s a revolutionary technology. However, there remains a lot of myths and misperceptions about AI in terms of what it can and can’t do. I do think the biggest issue, or concern being raised is in relation to the regulation of AI. I would encourage all the regulatory authorities across the world to come together, and implement the right practices and frameworks that are needed to make sure Gen AI is used responsibly. It can be done, in Europe, GDPR has been a great success and shows what can be achieved when governments work together, and that data protection framework is so important, and it was needed, and it’s the same with AI, we need proper regulations. The benefits and potential of AI is enormous, but again I must stress that it has to be regulated properly,” said ElShimy. TahawulTech.com, 14d ago
What is potentially most challenging in recruiting “AI talent” is identifying the actual skills, capacities, and expertise needed to implement the EO’s many angles. While there is a need, of course, for technological talent, much of what the EO calls for, particularly in the area of protecting rights and ensuring safety, requires interdisciplinary expertise. What the EO requires is the creation of new knowledge about how to govern—indeed, what the role of government is in an increasingly data-centric and AI-mediated environment. These are questions for teams with a sociotechnical lens, requiring expertise in a range of disciplines, including legal scholarship, the social and behavioral sciences, computer and data science, and often, specific field knowledge—health and human services, the criminal legal system, financial markets and consumer financial protection, and so on. Such skills will especially be key for the second pillar of the administration’s talent surge—the growth in regulatory and enforcement capacity needed to keep watch over the powerful AI companies. It’s also critical to ensure that these teams are built with attention to equity at the center. Given the broad empirical base that demonstrates the disproportionate harms of AI systems to historically marginalized groups, and the President’s declared commitment to advancing racial equity across the federal government, equity in both hiring and as a focus of implementation must be a top priority of all aspects of EO implementation. Brookings, 18d ago
Artificial Intelligence is rife with contradictions. It is a powerful tool that is also surprisingly limited in terms of its current capabilities. And, while it has the potential to improve human existence, at the same time it threatens to deepen social divides and put millions of people out of work. While its inner workings are highly technical, the non-technical among us can and should understand the basic principles of how it works - and the concerns that it raises. As the influence and impact of AI spread, it will be critical to involve people and experts from the most diverse backgrounds possible in guiding this technology in ways that enhance human capabilities and lead to... World Economic Forum, 28d ago

Latest

new INFRASTRUCTURE opportunities centered on increasing access to composting/food scrap hauling services. Increasing the number of free drop off locations located at farmers markets, gardens, churches, public parks, and schools was mentioned numerous times, possibly indicating a belief that this might be the easiest or quickest approach to the adoption of composting if “drop-off sites [were] in every neighborhood.” More gardens could also host on-site community composting if they had money available to pay stewards. Increasing municipal collection (“Bins for All!”) and community composting efforts and expanding options for home composting and for renters were all mentioned as options to increase access, as was the development of a compost (service) directory. Regardless of how access to composting services was increased, it should be affordable as “[you] shouldn't have to pay to do right thing.” Opportunities to utilize more appropriate technology abound, including using AI for sorting to reduce contamination, developing rat-proof bins (for collection and home composting), using biodigesters, and perhaps creating a new type of facility to handle (plastic-based) ‘compostable’ (biodegradable) serve-ware. To support the development of infrastructure opportunities, incentives such as “raising landfill tipping fees [or creating a] methane tax on private LF owners” and funding such as the “potential to pursue Federal funding to pilot creative programs” are critical. Increasing collaboration and coordination is also an opportunity to develop infrastructure, with at least one participant wondering, “Where is regional planning org? (e.g. CMAP)” and now is the time to increase collaboration because “passion exists among food producers in the region who need soil!!”
Rather than looking to individual households for new sources of materials to make compost, restaurants, breweries, landscapers, and other large food waste generators could be better engaged and plugged in to composting, perhaps driving or at least raising awareness of the issue for individual households. Attendees also mentioned as opportunities new places to process and acquire compost or new ways to handle food waste, including empty warehouses and factories, vacant lots, big box stores offering pickup of compost and drop off of scraps, and sending food waste from large institutions to hog farms 4. A list of comments on Opportunities in Infrastructure is displayed in Table 8, Appendix 1. POLICY opportunities included creating mandates for large generators (such as grocery stores, food processors, schools and universities) to compost and for public projects (roads, landscaping, buildings) to use finished compost. These two policy changes have the power to directly increase demand and may work to create a consistent end market for finished compost. There may exist a scalar mismatch in the region for compost—demand by individual consumers may be too small to justify the amount of effort it takes to create the compost, so there needs to be “more emphasis on getting more finished compost to market.” Other opportunities to increase demand for the finished product and access to services include passing Right to Compost legislation “so that building managers and landlords don't stand in the way” of their residents using compost pick up services; instituting bans and creating incentives and/or disincentives such as increasing landfill tipping fees; and incentivizing compost usage by farmers and public works. Creating consistent, stable, and expanded end markets for finished compost and the support for processing infrastructure is extremely important when implementing policies that would increase the amount of feedstock available to create compost.
Policy opportunities that did not require a large lift included creating new or expanding existing programs as “many municipalities already accept leaves & lawn clippings. Expanding to collecting food waste scraps is low-hanging fruit” and ensuring that funding is sufficient especially for creating technical assistance positions to support compost collection and processing. Also identified as an opportunity is to increase regional coordination... extension.org, 2d ago
new Those risks are compounded when considered against a catch phrase of our time – “data is the new oil” — underscoring data’s role as the critical resource in the 21st-century economy. To be absent from the data sets on which AI understands the world is to go missing entirely. And lest we allow ourselves to think there will be a break in this exponential growth, Hartmut Neven of the Quantum Artificial Intelligence Lab at Google would remind us of a law named for him — Neven’s law — which says that quantum computers are gaining computational power at a doubly exponential rate, growth that, while a long way off, has the potential to dwarf Moore’s law. Quantum computing raises the specter of sentient AI that would eventually come to think and feel like humans. Indigenous Canadian writer Alicia Elliott is not working on a law per se but on reinventing the Haudenosaunee creation story. Her people know what it means to wrestle back their culture and texts after losing them to outside forces. Looking to the future, she says there is nothing more important than to define and defend what it means to be human. Policymakers would do well to consider all that that means. GovTech, 2d ago
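The difference between Moore's-law-style doubling and the doubly exponential growth Neven's law describes is easy to see with a little arithmetic; the step units below are illustrative only, not calibrated to real hardware generations.

```python
# Toy arithmetic contrasting singly exponential growth (Moore's-law-style
# doubling) with doubly exponential growth (the shape Neven's law
# attributes to quantum computing). Step units are illustrative only.

for step in range(1, 6):
    singly = 2 ** step          # capacity doubles each step
    doubly = 2 ** (2 ** step)   # the exponent itself doubles each step
    print(f"step {step}: singly {singly:>3}, doubly {doubly:,}")
```

By step 5 the singly exponential curve has reached 32 while the doubly exponential one has reached over four billion, which is the sense in which one trend can "dwarf" the other.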
new Introducing Landing AI – the groundbreaking solution that takes the complexity out of building computer vision systems. Say goodbye to hours of tedious coding and programming, and say hello to a world where creating cutting-edge technology is as simple as having a conversation. Landing AI puts the power in customers' hands, empowering them to create jaw-dropping computer vision systems in mere minutes. The intuitive interface guides users through the process, eliminating guesswork and providing a seamless experience. With Landing AI, the possibilities are endless. Its stated mission is to democratize AI, ensuring that anyone, regardless of their technical background, can harness the power of computer vision. Whether customers work in healthcare, manufacturing, retail, or any other industry, Landing AI has the tools they need to revolutionize their work. Unlock new insights, automate processes, and stay one step ahead of the competition with this state-of-the-art computer vision technology. Join the ranks of forward-thinking professionals who are already leveraging the power of Landing AI to transform their businesses. Discover the possibilities that await and embrace a future where innovation knows no bounds. Don't let complex coding hold you back – take the leap and experience the magic of Landing AI today. saasworthy.com, 2d ago


new The AI floodgates opened in 2023, but the next year may bring a slowdown. AI development is likely to meet technical limitations and encounter infrastructural hurdles such as chip manufacturing and server capacity. Simultaneously, AI regulation is likely to be on the way. This slowdown should give space for norms in human behavior to form, both in terms of etiquette, as in when and where using ChatGPT is socially acceptable, and effectiveness, like when and where ChatGPT is most useful. ChatGPT and other generative AI systems will settle into people’s workflows, allowing workers to accomplish some tasks faster and with fewer errors. In the same way that people learned “to google” for information, humans will need to learn new practices for working with generative AI tools. But the outlook for 2024 isn’t completely rosy. It is shaping up to be a historic year for elections around the world, and AI-generated content will almost certainly be used to influence public opinion and stoke division. Meta may have banned the use of generative AI in political advertising, but this isn’t likely to stop ChatGPT and similar tools from being used to create and spread false or misleading content. Political misinformation spread across social media in 2016 as well as in 2020, and it is virtually certain that generative AI will be used to continue those efforts in 2024. Even outside social media, conversations with ChatGPT and similar products can be sources of misinformation on their own. As a result, another lesson that everyone – users of ChatGPT or not – will have to learn in the blockbuster technology’s second year is to be vigilant when it comes to digital media of all kinds. Tim Gorichanaz, Assistant Teaching Professor of Information Science, Drexel University. This article is republished from The Conversation under a Creative Commons license. Read the original article. GovTech, 2d ago
new The cautious yet optimistic adoption of these technologies by cities like Boston and states like New Jersey and California signals a significant shift in the public-sector landscape. The journey from skepticism to the beginnings of strategic implementation reflects a growing recognition of the transformative potential of AI for public good. From enhancing public engagement through sentiment analysis and accessibility to optimizing government operations and cybersecurity, generative AI is not just an auxiliary tool but a catalyst for a more efficient, inclusive and responsive government. However, this journey is not without its challenges. The need for transparent and accountable technologies, responsible usage, constant vigilance against potential misuse, and the importance of maintaining a human-centric approach in policymaking are reminders that technology is a tool to augment human capabilities, not replace them. With responsible experimentation and a commitment to continuous learning, governments can harness the power of generative AI to reshape how they deliver public services. The future of governance is being rewritten, and it's up to us to ensure that this story is one of progress, inclusivity and enhanced public welfare. Beth Simone Noveck is a professor at Northeastern University, where she directs the Burnes Center for Social Change and its partner projects, the GovLab and InnovateUS. She is core faculty at the Institute for Experiential AI. Beth also serves as chief innovation officer for the state of New Jersey. Beth’s work focuses on using AI to reimagine participatory democracy and strengthen governance, and she has spent her career helping institutions incorporate more participatory and open ways of working. GovTech, 2d ago
new He pointed to the real-world example of this disparity in the digital equity space. While Denver’s digital equity needs are primarily centered around affordability of Internet service, other parts of the state face affordability and connectivity issues. Because of the collaborative relationships he built at the local level, Edinger feels well positioned to work with municipalities to advance technology use in different areas — including expanding connectivity and developing AI policy. Another major area he plans to focus on is accessibility. The state’s work in this space so far has included the launch of the Aira tool for Coloradans with low vision and a pilot program to help state workers better serve individuals with disabilities using VR, but there is more work to be done. Accessibility across state government is a critical piece to ensuring all constituents are served equitably, he said. And although Edinger expects emerging technologies to continue to impact government, he also underlined the importance of effective enterprise management systems. The technology working behind the scenes is an important foundation for any future innovation, he said. At the heart of Edinger’s vision for OIT is the people — employees and constituents alike. Empowering employees through things like process improvements, Lean, and other technology-enabled advances is the best way to make state government better, he said. “I will just say I am confident that investing in people and unlocking their potential is the way we will get there,” he said. The brief period of overlap between Edinger and Neal-Graves has helped enable a smooth transition for OIT to prepare the agency for future work under Edinger’s leadership. As Neal-Graves recently told Government Technology, the future for OIT is bright. He expects the state to continue advancing in digital government and digital equity. GovTech, 2d ago
...“This release is a game changer for protocol design committees and portfolio optimization teams. Today, these teams facilitate a painstaking negotiation process between the medical, commercial, operational, and regulatory stakeholders, who all need to weigh in on the pros and cons of each protocol and portfolio decision,” said Orr Inbar, Co-Founder and CEO of QuantHealth. “With the power of QuantHealth’s generative AI, and now further enhanced by a Monte Carlo workflow, the Katina platform fires off thousands of protocols at once and allows each stakeholder to evaluate the simulations based on what matters to them most, be it endpoint success, ability to recruit patients, likelihood of getting an approval, competitive performance, etc. This high-throughput, holistic approach ensures that when it comes to protocol selection and development strategy, no stone is left unturned, and all voices in the room are heard.”... hitconsultant.net, 3d ago
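The Monte-Carlo-style sweep described above, which involves simulating many candidate protocols and letting each stakeholder rank the same results by their own metric, can be sketched in a few lines. Everything below is invented for illustration (the criteria names, the scoring, the random stand-in for the simulator); it is not the Katina platform's actual API or model.

```python
import random

# Illustrative Monte-Carlo-style protocol sweep: sample many hypothetical
# protocol designs, attach a simulated score per stakeholder criterion,
# then let each stakeholder rank the same simulations by the metric they
# care about. All names and scoring here are invented assumptions.

random.seed(0)  # reproducible illustration

CRITERIA = ["endpoint_success", "recruitment_feasibility", "approval_likelihood"]

def simulate_protocol(protocol_id: int) -> dict:
    """Stand-in for a real clinical-trial simulation engine."""
    return {criterion: random.random() for criterion in CRITERIA}

# "Fire off" a thousand candidate protocols at once
results = {pid: simulate_protocol(pid) for pid in range(1000)}

# Each stakeholder surfaces the best candidate by their own criterion
for criterion in CRITERIA:
    best = max(results, key=lambda pid: results[pid][criterion])
    print(f"top protocol by {criterion}: #{best} ({results[best][criterion]:.3f})")
```

The design point is that one batch of simulations serves every stakeholder: the medical, commercial, operational, and regulatory views are just different sort orders over the same result set.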
Like all AI, a lot of what generative AI does is the first draft, right. So this would be a good representation of a first draft; it’s not the final, you still need to augment. And here’s the thing: market research, ideally, should come first. Because if you have good market research, you can train the machine on that: you can provide the raw interviews, the one-on-ones, the focus groups, and sparse primary representations of all that, and put it into this so that you have a more thorough, more complete machine. Just as I would not start any kind of training data with synthetic data (except in this case, where we have to protect PII), I would not start with synthetic data; I would start with real data. So I say the market research comes first. And then that can feed the modeling. So it’s the opposite direction of the way people were probably going to try it. Trust Insights Marketing Analytics Consulting, 3d ago


...“Artificial intelligence has spread into many areas of life, and defence is no exception. How it could revolutionise defence technology is one of the most controversial uses of AI today,” said committee chair Lord Lisvane, adding that while government recognition of the need for responsible AI in its future defence capabilities is welcome, there are a number of steps it must take to ensure appropriate caution is taken. ComputerWeekly.com, 3d ago
As we embrace this new era of medical innovation, it’s crucial to recognize that the journey with AI in health care is just beginning. The potential for AI to improve patient outcomes, enhance diagnostic accuracy, and streamline health care operations is immense. Yet, alongside these opportunities, we must vigilantly address challenges such as AI hallucinations, data security, and ethical considerations. By fostering a culture of innovation, collaboration, and continuous learning, we can unlock the full promise of AI in health care. The future is bright, and together, we can chart a course toward a more efficient, effective, and patient-centered health care system. KevinMD.com, 3d ago
It’s inadvisable to jump the gun and sign the rising star that promises to transform your operations without any game experience, just as it’s unwise to shell out cash to the household name with an illustrious legacy that’s becoming weak in the knees and slow to keep up. When selecting a provider of intelligent automation solutions, you need to prioritize both innovation and experience and, most importantly, an understanding of and commitment to your organization’s needs. Haphazardly filling your tech stack with every new tool that promises to yield the best value from AI will create a cacophony of platforms, inhibiting efficiency – take care in selecting your solutions and choose those that have proven their worth in the context of the modern enterprise. ReadWrite, 3d ago
I love that JEB is run by a not-for-profit company, by scientists for scientists, focusing on curiosity-driven research and fostering scientific excellence while supporting the community. There have been so many changes in the publication landscape over the past couple of decades and, when the Open Science movement started out, I really thought, ‘Oh, this is the future’. But then so many for-profit Open Access journals started popping up, cluttering the publication landscape. Now, there's so much more noise and completely rubbish science being published, because it's not undergoing a rigorous review process overseen properly by scientific editors. Some Open Access journals are predatory and have made scientific publishing worse. At JEB, we care about our community, we care about supporting the authors and maintaining the integrity of the scientific process. Of course, we do need change in scientific publishing, but if you have not-for-profit journals run by scientists for the community, they will evolve their publication process in a way that meets the needs of the scientific community. They're not going to be chasing the next fad for profit, like some of the newer journals. I've published articles in some of the big Open Access journals and I've also observed how those journals have evolved over time, to the point where they have automated internal systems and become a ‘paper farm’. At some of these journals, the Editor doesn't select reviewers, because it's all automated by AI; it's even hard for an Editor to intervene in the article handling process to remove an offensive review or to reject a paper because of a fundamental scientific flaw. The system automatically assigns new reviewers until two reviewers accept an article for publication. Now, I refuse to interact with such journals. I will not review for them or submit papers, because I realize just how far they have taken the profit motive. 
The way forward is to support not-for-profit journals run by the scientific community for the community. The Company of Biologists, 3d ago
Zhang said he wrote the editorial, “Dialog between artificial intelligence & natural intelligence,” to prompt conversation among researchers and students. In it, he imagined a conversation between AI and natural intelligence (NI), in which the two debated their fundamental purposes and ultimate uses. According to NI, the objective goal is survival of the population, while AI’s goal should be to extend and maximize human capability — AI should complement the human brain. AI’s position ultimately comes down to the idea that all disciplines require creativity, and AI is “more than happy working for science… especially in the area of generating big and longitudinal data for machine learning.”... SCIENMAG: Latest Science and Health News, 3d ago
Introducing Voicemaker - the revolutionary solution that unleashes the full power of cutting-edge AI technology to create remarkable Text to Speech (TTS) voices like never before. Designed for professionals, the advanced platform lets users effortlessly craft human-like voices that will captivate their audience, leaving them in awe of the authenticity and quality that Voicemaker delivers. With Voicemaker, the creative possibilities are limitless. Whether a content creator, voiceover artist, or business professional, the comprehensive toolset empowers users to infuse their projects with a voice that resonates on a whole new level. Imagine seamlessly integrating an AI-generated voice into the next marketing video, audiobook, e-learning module, or even virtual assistant application. Voicemaker elevates work to a realm of professionalism and innovation previously unattainable. Unleash the imagination and explore the vast array of features that Voicemaker has to offer. The intuitive user interface makes it effortless to navigate through a diverse range of voice options. Experiment with different accents, ages, and genders to find the perfect fit for a project. Fine-tune voice creation by adjusting pitch, speed, and even adding natural pauses. The result is a voice that reflects your vision and amplifies the impact of your content. The commitment to excellence is reflected in the unparalleled quality of the TTS voices. Voicemaker harnesses state-of-the-art AI technology to generate voices that sound just like real humans. Embrace the future of voice technology with Voicemaker, where every syllable, every inflection, and every breath is meticulously crafted to deliver an authentic and engaging experience. When it comes to advanced AI technology, Voicemaker stands proudly at the forefront. A team of dedicated engineers and language specialists is continually pushing the boundaries of what's possible, ensuring that users always have access to the latest advancements in voice generation.
Voicemaker's makers believe in empowering professionals to create groundbreaking work that makes a lasting impact - and Voicemaker is the tool that will transform ideas into reality. Embrace the power of AI and start crafting human-like voices that leave a lasting impression. saasworthy.com, 3d ago


Oregon Gov. Tina Kotek has signed an executive order establishing a new advisory council to develop a plan for ethical, transparent and inclusive AI use in Oregon government decision-making. Gov. Kotek signed the order Nov. 28, following in the footsteps of at least half a dozen other states where governors have used their executive power to mandate some kind of AI action plan. “Artificial intelligence is an important new frontier, bringing the potential for substantial benefits to our society, as well as risks we must prepare for,” Kotek said. “This rapidly developing technological landscape leads to questions that we must take head on, including concerns regarding ethics, privacy, equity, security and social change. It has never been more essential to ensure the safe and beneficial use of artificial intelligence — and I look forward to seeing the work this council produces. We want to continue to foster an environment for innovation while also protecting individual and civil rights.”... GovTech, 3d ago
Goal: Future prosaic AIs will likely shape their own development or that of successor AIs. We're trying to make sure they don't go insane. There are two main ways AIs can get better: by improving their training algorithms or by improving their training data. We consider both scenarios and tentatively believe data-based improvement is riskier than architecture-based improvement. For the Supervising AIs Improving AIs agenda, we focus on ensuring stable alignment when AIs self-train or train new AIs and study how AIs may drift through iterative training. We aim to develop methods to ensure automated science processes remain safe and controllable. This form of AI improvement focuses more on data-driven improvements than architectural or scale-driven ones.alignmentforum.org, 3d ago
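The drift concern above, where small errors compound when each model generation trains on the previous generation's own outputs, can be illustrated with a deliberately tiny toy. This is an invented sketch of the phenomenon, not the agenda's actual method: the "model" is a single number, and each generation is the mean of noisy self-generated samples.

```python
import random

# Toy illustration of drift under iterative self-training: each
# generation "trains" on noisy samples of the previous generation's
# output, so the estimate performs a random walk away from the original
# value. The model and noise scale are invented assumptions.

random.seed(42)

value = 0.0                 # what the original model encodes
history = [value]
for generation in range(20):
    # "train" on 10 noisy self-generated samples; new model is their mean
    samples = [value + random.gauss(0, 0.1) for _ in range(10)]
    value = sum(samples) / len(samples)
    history.append(value)

drift = abs(history[-1] - history[0])
print(f"drift after {len(history) - 1} generations: {drift:.3f}")
```

Even though each generation is an unbiased estimate of the one before it, nothing anchors the chain back to the original value, which is the intuition behind monitoring stability across iterated training rather than only across a single step.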
Why is safeguarding AI data and models crucial? As AI systems become more integral to business operations, the data they process and the models they refine become valuable intellectual assets. Protecting these assets is vital to maintaining a competitive edge, ensuring the integrity of AI operations, and safeguarding against malicious actors who could exploit weaknesses to gain unauthorized access or corrupt AI behavior. Effective security measures prevent data breaches that could lead to significant financial loss, reputational damage, and eroded user trust. Join Carmen Kempka, Wibu-Systems’ Director Corporate Technology, at DevCamp to explore the intersection of AI and security, where we'll unravel strategies to protect your AI's core against emerging threats. Your AI's intelligence is only as strong as its shield. Let's fortify it together. wibu.com, 3d ago

Top

Third, everyone who has any role in the deployment of AI needs to be thinking about the ethical and even moral implications of the technology. Profit alone cannot be the only factor we optimize our companies for, or we’re going to create a lot of misery in the world that will, without question, end in bloodshed. That’s been the tale of history for millennia – make people miserable enough, and eventually they rise up against those in power. How do you do this? One of the first lessons you learn when you start a business is to do things that don’t scale. Do things that surprise and delight customers, do things that make plenty of human sense but not necessarily business sense. As your business grows, you do less and less of that because you’re stretched for time and resources. Well, if AI frees up a whole bunch of people and increases your profits, guess what you can do? That’s right – keep the humans around and have them do more of those things that don’t scale. Christopher S. Penn - Marketing Data Science Keynote Speaker, 8d ago
To manage data challenges, Operation Light Shine powers INTERCEPT teams with technology from Pathfinder Labs, a private vendor that provides document and media analysis, investigative training, communication analysis and other services to teams investigating online crimes. Pathfinder Labs uses MongoDB’s Atlas document database to aggregate massive files of data into a central database, called Paradigm, with cloud storage, allowing it to be sharable and searchable. Cole said the combination of AI technology and cloud storage capability puts important information in investigators’ hands quickly. “It uses artificial intelligence to say, ‘Based on the way that AI has been trained, I recognize that these were the sessions that meet the kinds of criteria that’s important to you, I’m going to highlight these and put these to the top, that’s probably where you should be more focused,’” said Cole. “From an investigative intelligence standpoint, it’s incredibly valuable and incredibly efficient, versus the manual ways you’d have to go through all of that data. It allows us to be more efficient, allows us to catch more offenders and allows us to rescue our kids.” The technology also uses natural language processing to highlight email addresses, phone numbers, and language patterns that could be exploitative in nature. Cole added that although many law enforcement agencies may be hesitant to consider using cloud computing for their investigative evidence, it can be a game changer. “The amount of data that we’re having to deal with in these cases, bare metal shelves are not scalable,” said Cole. “It’s more cost effective, in most cases, and more secure in most cases than trying to manage your own data file.” The INTERCEPT task forces are producing results. In June, an investigation by the northeast Florida chapter resulted in the conviction of two men for child sexual exploitation.
The case involved more than 2,100 videos and 600 photos stored on several electronic devices.GovTech, 19d ago
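The pattern highlighting described above can be approximated with simple regular expressions. The sketch below is illustrative only; it is not Pathfinder Labs' actual NLP pipeline, and the patterns (basic email syntax, US-style phone numbers) are deliberately simplified:

```python
import re

# Simplified patterns -- real investigative tooling would use far more
# robust extraction, plus language-pattern models beyond regex.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def highlight_indicators(text: str) -> dict:
    """Surface contact details buried in large message dumps,
    a toy stand-in for the NLP highlighting described above."""
    return {"emails": EMAIL.findall(text), "phones": PHONE.findall(text)}
```

In a real pipeline these matches would be ranked and surfaced alongside model-scored sessions rather than returned as flat lists.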
School district officials are eager for policy guidance around how to protect student privacy with AI tools, deal with the possibility of students using the technology to cheat, and a host of other challenges. The federal government plans to step up to the plate to help, but it may take some time. A sweeping White House executive order on AI released Oct. 30 calls on the department to develop AI policies and guidance within a year. In crafting those resources, the department is "not looking for the perfect policy prescription that is a one size fits all for everybody," Rodríguez said. Instead, officials should ask: "What structures do we want to see to help support the responsible use of AI in education? ... I think one of the most important pieces is: How do we think about building the capacity and exposure of our educators around how AI can be of use?" Last May, the Education Department released a report on AI that called for keeping "humans in the loop" when using the technology to help with tasks like creating lesson plans, tutoring students, or making recommendations about how to help individual students grasp a concept. Rodríguez elaborated on that principle at the AEI event. AI tools "need to have an expectation that human judgment and teacher judgment be part of the process of learning," he said. Meanwhile, the federal government needs to ensure its laws keep pace with developments in technology, Rodríguez said, in response to a question about the main law governing student privacy, the Family Educational Rights and Privacy Act, or FERPA. FERPA was signed into law in 1974, almost 50 years ago and well before the birth of the internet. Rodríguez agreed that the law needs to be updated to reflect an environment where technology products and services, including those powered by AI, are collecting a mind-boggling quantity of student data. While rewriting FERPA — or creating new federal privacy laws to supplement it — will be up to Congress, the Education Department has already begun conducting listening sessions to inform a rewriting of the regulations, or rules governing the law, Rodríguez said. "How we utilize data, how we collect that data looks so different than it did back" in the 1970s when the law was passed, Rodríguez said. "Think about the average of 148 tech tools that are being used every year by students or by their teachers, many of those tools gathering student data. We need a more modern policy infrastructure to match the technological infrastructure we're seeing." ©2023 Education Week (Bethesda, Md.). Distributed by Tribune Content Agency, LLC. GovTech, 17d ago
I was left very impressed by the organization and quality of speakers at this event, given how it provided a platform for all different kinds of thoughts at a time when it is so needed. Above all, I appreciated the emphasis on the importance of public engagement – panelists made sure to mention that this groundwork is hard but that this should not distract from its importance. In this way, we can opt for public engagement proportional to the risk and scale of AI technology – the more serious the calculated technological impact, the stronger the imperative to engage the consumers/recipients of the technology. Public engagement should not be a tick-box exercise. The organizations and companies that follow this line of action most thoroughly will be the most successful in the long run. Montreal AI Ethics Institute, 5d ago
Some digital rights groups fear that the order could result in little oversight. “Biden has given the power to his agencies to now actually do something on AI,” Caitlin Seeley George, managing director at Fight for the Future, a nonprofit group that advocates for digital rights, said in an email. “In the best case scenario, agencies take all the potential actions that could stem from the executive order, and use all their resources to implement positive change for the benefit of everyday people.” “But there’s also the possibility that agencies do the bare minimum, a choice that would render this executive order toothless and waste another year of our lives while vulnerable people continue to lose housing and job opportunities, experience increased surveillance at school and in public, and be unjustly targeted by law enforcement, all due to biased and discriminatory AI,” she said. NIST is likely to play a pivotal role in creating new safety standards on AI. Vice President Kamala Harris announced at the Global Summit for AI Safety in the U.K. last week that under Biden’s order NIST will establish an AI Safety Institute, which she said “will create rigorous standards to test the safety of AI models for public use.” But a study of NIST’s physical and financial needs mandated by Congress and completed by the National Academies of Sciences, Engineering, and Medicine in February found serious deficiencies at the agency. “A substantial number of facilities, in particular the general purpose laboratories, have functional deficiencies in meeting their environmental requirements for temperature and humidity, and of electrical systems for stability, interruptibility, and for life safety,” the report said about NIST’s facilities. “Most of the older laboratories that have not been renovated fail to provide the functionality needed by world-class scientists on vital assignments of national consequence.” As a result, the National Academies report said, “these deficient functionalities of NIST’s facilities constitute a major threat to its mission performance and thereby, to our nation’s economy, national security, and quality of life.” Congress appropriated $1.65 billion for NIST in fiscal 2023. A spokesperson for NIST did not respond to questions on whether the agency plans to seek an increase in funding to meet the new requirements under the order. But NIST will likely need to double its team of AI experts to 40 people to implement the president’s order, said Divyansh Kaushik, the associate director for Emerging Technologies and National Security at the Federation of American Scientists, who has studied NIST’s needs. The agency will also need about $10 million “just to set up the institute” announced by Harris, Kaushik said. “They don’t have that money yet.” In another concern, to attract the top AI talent to write safety standards and develop protocols for safety testing by the world’s largest AI companies, the agency needs “to be able to match market salaries,” Kaushik said. “But that’s obviously not going to happen.” Congressional action on AI could address several worries about the role of agencies as well as in bridging funding gaps, said Tony Samp, senior policy adviser at the law firm of DLA Piper and onetime aide to Sen. Martin Heinrich, D-N.M. Heinrich is one of the three lawmakers advising Senate Majority Leader Charles E. Schumer, D-N.Y., on the congressional approach to AI legislation. “There are unique areas of responsibility that Congress has that go beyond what an executive order is capable of doing,” Samp said. “So from a resource allocation perspective, they have the power of the purse. And if NIST is being charged with so many responsibilities, it would only make sense for [Congress] to consider legislation to bolster resources for that agency or others.” Schumer has said he supports a minimum of $32 billion in federal funding to advance AI technologies, with the money potentially going to private companies to encourage research and development in specific areas. ©2023 CQ-Roll Call, Inc. Distributed by Tribune Content Agency, LLC. GovTech, 26d ago
And then our social norms and our expectations completely shift, even though the original decision to turn down people who had just a high school degree was not right. So this is not a hypothetical situation, because we are having a lot of similar cases happen when AI systems are engaged in decision-making, especially when people don’t know how to evaluate them. There’s a lot of evidence, for example, that doctors who get AI recommendations don’t know how to evaluate them, and they sometimes get confused where the AI recommendation comes from, and they may put weight on things that they shouldn’t really overweight because they don’t understand the black box nature of these systems. So I think human judgment and the expert opinion of people who have accumulated expertise is going to be very important. This is particularly true when we start using AI not just for recruitment, but lots of other human resource tasks, for example, promotion or deciding who’s doing well or how to assign workers to different shifts. I think all of these we’re going to do much better if we do something broadly consistent with what I try to emphasize in my written testimony: choose a pro-human direction of AI, which means that we try to choose the AI technology’s trajectory in a way that empowers humans, and then we train the decision makers so that they have the right expertise to work with AI. And that includes valuing their own judgment, not becoming slaves of the AI system. Thank you for that question. Tech Policy Press, 25d ago

“This year has seen a great leap in generative AI,” says Gomis. “There’s a pace of innovation that is being accelerated, and unless you have the capability of operating continuously, and have enough specialized manpower and tools to keep up in this race, the system you will be working with is not going to be secure.” Speaking of the “industrialization and scale” of fraud, he says that easy access to tools that can generate deepfakes in minutes means that the playing field of security has changed, and legacy methods are past their expiry date. Biometric Update | Biometrics News, Companies and Explainers, 3d ago
Aside from mitigating suffix-based jailbreaks, Robey explains that one of the most significant challenges in the field of AI safety is monitoring various trade-offs. "Balancing efficiency with robustness is something we need to be mindful of," he says. "We don't want to overengineer a solution that's overly complicated because that will result in significant monetary, computational, and energy-related costs. One key choice in the design of SmoothLLM was to maintain high query efficiency, meaning that our algorithm only uses a few low-cost queries to the LLM to detect potential jailbreaks." techxplore.com, 3d ago
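The query-efficient, perturbation-based idea behind defenses like SmoothLLM can be sketched as follows. This is a simplified illustration, not the authors' implementation: the `query_llm` and `judge_jailbroken` callables are assumed stand-ins for a real model and a real jailbreak classifier. Brittle adversarial suffixes tend to break when characters are randomly scrambled, so a majority vote over a few perturbed copies keeps query cost low:

```python
import random
import string

def perturb(prompt: str, q: float = 0.1, rng=random) -> str:
    # Randomly replace a fraction q of characters; this scrambles brittle
    # adversarial suffixes while leaving benign prompts mostly readable.
    chars = list(prompt)
    for i in range(len(chars)):
        if rng.random() < q:
            chars[i] = rng.choice(string.ascii_letters)
    return "".join(chars)

def smoothed_is_jailbroken(prompt, query_llm, judge_jailbroken,
                           n: int = 5, q: float = 0.1) -> bool:
    # Query the model on n perturbed copies and majority-vote the
    # judgments; n stays small to preserve the "few low-cost queries"
    # property the quote emphasizes.
    votes = [judge_jailbroken(query_llm(perturb(prompt, q))) for _ in range(n)]
    return sum(votes) > n / 2
```

The trade-off Robey describes shows up directly in `n` and `q`: more copies and heavier perturbation buy robustness at the cost of extra queries and degraded benign prompts.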
Generative AI can offer useful tools across the recruiting process, as long as organizations are careful to make sure bias hasn’t been baked into the technology they’re using. For instance, there are models that screen candidates for certain qualifications at the beginning of the hiring process. As well-intentioned as these models might be, they can discriminate against candidates from minoritized groups if the underlying data the models have been trained on isn’t representative enough. As concern about bias in AI gains wider attention, new platforms are being designed specifically to be more inclusive. Chandra Montgomery, my Lindauer colleague and a leader in advancing equity in talent management, advises clients on tools and resources that can help mitigate bias in technology. One example is Latimer, a large language model trained on data reflective of the experiences of people of color. It’s important to note that, in May, the Equal Employment Opportunity Commission declared that employers can be held liable if their use of AI results in the violation of non-discrimination laws – such as Title VII of the 1964 Civil Rights Act. When considering AI vendors for parts of their recruiting or hiring process, organizations must look carefully at every aspect of the design of the technology. For example, ask for information about where the vendor sourced the data to build and train the program and who beta tested the tool’s performance. Then, try to audit for unintended consequences or side effects to determine whether the tool may be screening out some individuals you want to be sure are screened in. Hunt Scanlon Media, 3d ago
The proposed rules would require companies to inform people ahead of time how they use automated decision-making tools and let consumers opt in or out of having their private data used for such tools. Automated technology — with or without the explicit use of AI — is already used in situations such as deciding whether somebody is extended a line of credit or approved for an apartment. Some early examples of the technology have been shown to unfairly factor race or socioeconomic status into decision making — a problem sometimes known as "algorithmic bias" that regulators have so far struggled to rein in. The actual rulemaking process could take until the end of next year, said Dominique Shelton Leipzig, an attorney and privacy law expert at the law firm Mayer Brown. She noted that in previous rounds of rulemaking by the state's privacy body, little has changed from inception to implementation. The proposed rules do pose one significant departure from existing state privacy rules, she said: requiring companies to provide notice to consumers about when and why they are using automated decision-making tools is "pushing in the direction of companies being transparent and thoughtful about why they are using AI, and what the benefits are ... of taking that approach." The rules are not the state's first run at creating privacy protections for automated decision-making tools. One bill that did not make it through the state Legislature this year, authored by Assembly Member Rebecca Bauer-Kahan, D-Orinda, sought to guard against algorithmic bias in automated systems. It was ultimately held up in committee but could be reintroduced in 2024. State Sen. Scott Wiener, D-San Francisco, has also introduced a bill that will be fleshed out next year to regulate the use of AI more broadly. That effort envisions testing AI models for safety and putting more responsibility on developers to ensure their technology isn't used for malicious purposes. California Insurance Commissioner Ricardo Lara also issued guidelines last year on how artificial intelligence can and can't be used to determine eligibility for insurance policies or the terms of coverage. In an emailed statement, his office said it "recognizes algorithms and artificial intelligence are susceptible to the same biases and discrimination we have historically seen in insurance." "The Commissioner continues to monitor insurance companies' use of artificial intelligence and 'Big Data' to ensure it is not being used in a way that violates California laws by unfairly discriminating against any group of consumers," his office said. Other Bay Area lawmakers came out in support of the privacy regulations moving forward. "This is an important step toward protecting data privacy and the unwanted use of AI," said State Sen. Bill Dodd, D-Napa. "Maintaining human choice is critical as this technology evolves with the prospect for so much good but also the potential for abuse." The first hearing on the proposed rules is on Dec. 8. © 2023 the San Francisco Chronicle. Distributed by Tribune Content Agency, LLC. GovTech, 3d ago
As AI continues to advance, it is critical to ensure that its development and deployment align with ethical principles. By promoting ethical AI practices, organizations can mitigate potential risks and biases, build trust with users, and ensure that AI is used for the betterment of society. Upskilling AI talent is not just a necessity but a moral imperative to shape a future where AI is used ethically and responsibly. While the future of AI may fuel and be fueled by technological advancements, it is driving the need to cultivate exceptional minds. By investing in talent development, fostering collaboration, and prioritizing ethical considerations, AI is serving as a catalyst for progress, exceptional talent, and a brighter future. RTInsights, 3d ago
Given the sheer volume of online sales during the holiday shopping season and customers’ expectation of personalisation, both AI and the collection and analysis of customer data are becoming increasingly important priorities for forward-thinking brands. And it appears that brands are listening. There was a 20% increase in the utilisation of data-driven segmentation by SAP Emarsys customers, meaning shoppers were better matched with personalised offers and the items they wished to buy. With peak sending speeds of 126MM per hour, SAP Emarsys enabled its customers to reach their customers at scale and speed over the Black Friday weekend. Marketers were empowered to deliver the right deals – while adapting to market needs – to create value and grow loyalty. Customer Experience Magazine, 3d ago

Firstly, we aim to continue expanding our portfolio of AI-based solutions across various industries, driving real-world impact through applications in healthcare, finance, manufacturing, and more. Moreover, fostering partnerships and collaborations with leading tech firms, research institutions, and startups is a priority, as it allows us to remain at the forefront of technological advancements. Lastly, we are committed to upholding the highest standards of ethics and sustainability in AI, ensuring that our innovations benefit society while respecting privacy, security, and environmental concerns. Insightssuccess Media and Technology Pvt. Ltd., 24d ago
Critically, these approaches should protect privacy and not collect by default personally identifiable information for content that is not AI-generated. We should be wary of how any provenance approach can be misused for surveillance and stifling freedom of speech. My third recommendation is on detection, alongside indicating how the content we consume was made. There is a continuing need for after-the-fact detection for content believed to be AI-generated. From witnesses’ experience, the skills and tools to detect AI-generated media remain unavailable to the people who need them the most, including journalists, rights defenders and election officials, domestically and globally. It remains critical to support federal research and investment in this area to improve detection overall and to close this gap. It should be noted that both provenance and detection are not as relevant to non-consensual sexual imagery, where real versus fake is often beside the point, since the harm is caused in other ways; we need other responses to that. As a general and final statement: for both detection and provenance to be effective in helping the public to understand how deepfake technologies are used in the media we consume, we need a clear pipeline of responsibility that includes all the technology actors involved in the production of AI technologies more broadly, from the foundation models to those designing and deploying software and apps to the platforms that disseminate content. Thank you for the opportunity to testify. Tech Policy Press, 24d ago
The Executive Order on the development and use of artificial intelligence (AI) issued by President Biden on October 30 is a directive that contains no fewer than 13 sections. But two words in the opening line strike at the challenge presented by AI: “promise” and “peril.” As the document’s statement of purpose puts it, AI can help to make the world “more prosperous, productive, innovative, and secure” at the same time that it increases the risk of “fraud, discrimination, bias, and disinformation,” and other threats. Among the challenges cited in the Executive Order is the need to ensure that the benefits of AI, such as spurring biomedical research and clinical innovations, are dispersed equitably to traditionally underserved communities. For that reason, a section on “Promoting Innovation” calls for accelerating grants and highlighting existing programs of the Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD) program from the National Institutes of Health (NIH). And the Colorado School of Public Health is deeply involved in the initiative.

ColoradoSPH helps ensure that artificial intelligence serves and empowers all people

AIM-AHEAD is a national consortium of industry, academic and community organizations with a “core mission” to ensure that the power of AI is harnessed in the service of minorities and other groups historically neglected or poorly served by the healthcare system. A key focus – though not the only one – is using AI to probe electronic health records (EHRs), which can be rich sources of clinical and other data. “The goal of [AIM-AHEAD] is to use this technology to try to eliminate or better understand and address health disparities,” said Evelinn Borrayo, PhD, associate director of research at the Latino Research and Policy Center (LRPC) of ColoradoSPH and Director for Community Outreach and Engagement at the CU Cancer Center. “This consortium is about the inclusion of communities that historically tend to be left behind.” Borrayo and Spero Manson, PhD, director of the Centers for American Indian and Alaska Native Health (CAIANH) at ColoradoSPH, co-direct the North and Midwest Hub of the AIM-AHEAD initiative, a sprawling 15-state area. Both are also members of the AIM-AHEAD Leadership Core. The hub, which is housed within CAIANH and ColoradoSPH, serves a variety of “stakeholders” who can help to develop AI, including Hispanic/Latino community health organizations, tribal epidemiology centers, urban Indian health centers, and more.

Addressing the shortfalls of AI and machine learning development

Manson acknowledged that the last decade has brought “an explosion of interest as well as investment” in exploring the promise of AI and machine learning (ML) – which uses algorithms to train computers to perform tasks otherwise assigned to humans – and applying that knowledge to improving healthcare. “There have been substantial areas of achievement in that regard,” Manson said. But he said the work has also revealed “substantial bias” in the algorithms and predictive models as they are applied to “underrepresented and marginalized populations.” He noted, for example, that the data in EHRs may be incomplete because of barriers to care that people face, including socioeconomic status, race and ethnicity, and geography. In that situation, AI and ML don’t correct for these factors because the technology uses the EHR itself to analyze the data and make predictions, Manson said. That’s why deepening the reservoir of data in EHRs and other repositories is imperative for the development of AI and ML, he said. “The idea is to improve healthcare for all citizens, not just those that have benefited narrowly in the past,” he noted.

Improving the diversity of the AI workforce

In addition, the workforce of scientists working on AI and ML lacks diversity, while the benefits of research in the field have not yet adequately spread to underserved communities, Manson said. The North and Midwest Hub has undertaken several “outreach and engagement” projects to meet the goals of AIM-AHEAD, with ColoradoSPH playing a significant role. For example, two pilot projects aim to build capacity for applying AI and ML to aid communities. In one, Clinic Chat, LLC, a company led by Sheana Bull, PhD, MPH, director of the mHealth Impact Lab at ColoradoSPH, is collaborating with Tepeyac Community Health Center, which provides affordable integrated clinical services in northeast Denver. The initiative, now underway, uses chatbots to assist American Indian/Alaska Native and Hispanic/Latino people in diagnosing and managing diabetes and cancer. A second project is working toward incorporating AI and ML coursework into the curriculum for students earning ColoradoSPH’s Certificate in Latino Health. “It’s an opportunity to introduce students to how using AI and ML can help us understand and benefit the [Latino] population,” Borrayo said. The idea is to build a workforce with the skills to understand the unique healthcare needs of Latinos and apply AI and ML skills to meet them, she added. “One of the approaches we are also taking is reaching students in the data sciences,” Borrayo said. “We can give those students the background and knowledge about Latino health disparities so they can use those [AI and ML] skills as well.”

Building a generation that uses AI to improve healthcare

Manson also noted that the North and Midwest Hub supports Leadership and Research fellowship programs, which are another component of what he calls “an incremental capacity-building approach” to addressing the goals of AIM-AHEAD. “We’re seeking to build successive generations, from the undergraduate through the doctoral/graduate to the early investigator pipeline, so these individuals move forward to assume positions of leadership in the promotion of AI and ML,” Manson said. Borrayo said that she is most interested in continuing to work toward applying solutions for these and other issues in communities around the region. She pointed to the Clinic Chat project as an example of how AI and ML technology can be used to address practical clinical problems. “I think understanding the data, algorithms and programming is really good for our underrepresented investigators to learn,” she said. “But for our communities, I think the importance lies in the application. How can we benefit communities that are typically left behind or don’t have access to healthcare in the ways most of us do?” For Manson, a key question is how members of American Indian/Alaska Native, Latino, and other communities can “shift” from being “simply consumers and recipients” of work in AI and ML and “become true partners” with clinicians and data specialists in finding ideas that improve healthcare. “The field will be limited in terms of achieving the promise [of AI and ML] until we have that kind of engagement with one another,” Manson said. cuanschutz.edu, 4d ago

A: When I first started at e2open, I was not aware of the importance of having a solid supply chain management system. During my time here, Brexit was a big bump in the road regarding supply chains, but e2open’s solutions have been able to help so many businesses run successfully with our array of solutions to cater to a variety of clients. E2open is constantly developing products to make sure we are up to date with the relevant supply chain trends, like the use of AI. Supply Chain Software | Strategic Digital Supply Chain | e2open, 3d ago
This section doesn’t bar a finding of fair use in other scenarios that are not delineated within. If your interpretation of fair use were realized, it would make creating AI impossible to begin with, because in order to make an LLM you need an incredibly large corpus of data. The total licensing cost of that would potentially be in the trillions of dollars, if not quadrillions. So I’m pretty sure a court would find that ruling fair use inapplicable in those circumstances would place far too heavy a burden on those who wish to use the work, and would therefore reject the original complaint of copyright infringement to begin with. And we’re just talking about LLMs! If you want to make it impossible to train actually good AIs, then we can realize your ridiculous notion of fair use. But even realizing that kind of interpretation risks running afoul of the copyright clause, which is very clear: “[the United States Congress shall have power] To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.” The important part is “promote the Progress of Science and useful Arts”. Making it untenable to make AIs without paying impossible-to-pay copyright licensing fees would go against that part of the clause, because AIs, in order to be as good as they can be, need huge datasets. Techdirt, 3d ago
Ideally, you’d like a shape that is equal on all sides (a regular pentagon). That shows that you are able to optimize across all fitness functions. But the reality is that it will be challenging to achieve that shape—as you prioritize one fitness function, it will affect the lines for the other radii. This means there will always be trade-offs depending on what is most important for your generative AI application, and you’ll have a graph that is skewed towards a specific radius. This is the criterion that you may be willing to de-prioritize in favor of the others, depending on how you view each function. In our chart, each fitness function’s metric weight is defined as such—the lower the value, the less optimal it is for that fitness function (with the exception of model size, in which case the higher the value, the larger the model). CoinGenius, 3d ago
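One way such a trade-off chart can be assembled is to rescale every fitness-function score onto a common 0-to-1 radius before plotting. The metric names and ranges below are hypothetical, not from the article; metrics whose raw value is better when lower (latency, cost, and here model size, on the assumption that smaller is preferable) simply have their range flipped:

```python
# Hypothetical (worst, best) raw ranges per fitness function.
# Flipped ranges (worst > best) encode "lower raw value is better".
RANGES = {
    "accuracy":   (0.0, 1.0),
    "latency_ms": (500.0, 10.0),
    "cost_usd":   (5.0, 0.1),
    "robustness": (0.0, 1.0),
    "size_gb":    (70.0, 1.0),   # assumes a smaller model is preferable
}

def radar_value(metric: str, raw: float) -> float:
    """Linearly rescale a raw score so 0.0 = worst and 1.0 = best on
    every axis, letting all five fitness functions share one chart."""
    worst, best = RANGES[metric]
    return (raw - worst) / (best - worst)
```

With this normalization, a perfectly balanced model traces the regular pentagon described above, and any de-prioritized criterion shows up as the short radius.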
Another instance illustrating effective use-case strategy is found in the context of the Health Insurance Portability and Accountability Act (HIPAA). Although HIPAA was established before the widespread use of algorithms in healthcare, it has consistently played a pivotal role in shaping how AI tools handle patient data and maintain health privacy. Serving as a formidable guardrail, HIPAA compels healthcare technologies to prioritize the protection of patient data, ensuring that the integration of AI aligns with stringent standards of privacy and security, without restricting innovation in the sector. Tech Policy Press, 4d ago
“Investors that succeed in a challenging market will be those that prioritise developing more intelligent data-driven insights and leveraging technology to improve efficiency and decision making. The other thing that needs to happen to encourage growth is a better transaction process that facilitates more efficient and cheaper buying and selling of commercial and residential real estate. Particularly in more challenging economic conditions, protracted transactions which are slowed down by lack of data, resources or manpower can derail a perfectly viable deal. We can’t be in that position going into 2024, especially with the range of technologies we have at our fingertips. It was fantastic to see the Chancellor confirm in his Autumn Statement an additional £3 million to digitise local council data and develop new technological solutions to speed up residential transactions. We hope that signals further investment from both industry and government to develop and integrate new technologies, particularly game-changing innovations like AI, which could dramatically speed up transaction times, saving individuals and businesses a huge amount of time and money, and creating a sector that’s better primed for growth.” Legal Futures, 4d ago
At the edge where AI is being built into much smaller and less sophisticated systems, the potential pitfalls can be very different. “When AI is on the edge, it is dealing with sensors, and that data is being generated in real time and needs to be processed,” said Sharad Chole, chief scientist at Expedera. “How sensor data comes in, and how quickly an AI NPU can process it changes a lot of things in terms of how much data needs to be buffered and how much bandwidth needs to be used. How does the overall latency look? Our objective is to target the lowest possible latency, so from the input from the sensor to the output that maybe goes into an application processor, or maybe further processing, we’d like to keep that latency as low as possible and make sure that we can process that data in a deterministic fashion.” Semiconductor Engineering, 4d ago
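The buffering-versus-latency budget Chole describes can be made concrete with a toy bounded sensor queue. This is an illustrative sketch, not Expedera's NPU design: capping the backlog caps both memory use and the worst-case queueing delay, which is what makes end-to-end processing deterministic.

```python
from collections import deque

class SensorBuffer:
    """Fixed-capacity buffer for incoming sensor samples. With a known
    per-sample service time, worst-case queueing latency is simply
    capacity * service time, i.e., bounded by construction."""

    def __init__(self, capacity: int):
        self._buf = deque(maxlen=capacity)
        self.dropped = 0  # samples evicted because the consumer fell behind

    def push(self, sample) -> None:
        if len(self._buf) == self._buf.maxlen:
            self.dropped += 1  # deque evicts the oldest sample on append
        self._buf.append(sample)

    def __len__(self) -> int:
        return len(self._buf)

    def worst_case_latency_ms(self, service_ms: float) -> float:
        return self._buf.maxlen * service_ms
```

Sizing the capacity is exactly the trade-off in the quote: a deeper buffer tolerates burstier sensors and drops fewer samples, but raises the guaranteed-latency ceiling and the memory bandwidth needed to drain it.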

Henrike: I think there are two angles to this. First of all we are a regulator, so we need to set guides and rules for industry on complex models such as AI. Secondly, as an organisation we have to think about how we want to use the technology. And, of course, we need to align what we want to do internally with what we’re saying externally. It’s very exciting, and both Nat and I have been involved in the external engagement but also the development of internal frameworks for what kind of AI we want to use and for what purposes. We have to consider that whatever we want to do is not only compliant with our own handbook but is also ethical. We have conducted various pieces of research over the past year, and have learned a lot from the responses. Findings from our two-year joint project with the Bank of England are set out in some detail in our AI Public-Private Forum final report, and for me that was such an insight into the debate around AI use and probably one of the pieces of work I’m most proud of. It’s critical to be able to understand the challenges of using AI within financial services as well as establishing how we can support safe adoption of the technology, so it was invaluable to understand the views of a community of practitioners, separated from the hype. bcs.org, 12d ago
The Biden administration’s recent executive order and enforcement efforts by federal agencies such as the Federal Trade Commission are the first steps in recognizing and safeguarding against algorithmic harms. And though large language models, such as GPT-3 that powers ChatGPT, and multimodal large language models, such as GPT-4, are steps on the road toward artificial general intelligence, they are also algorithms people are increasingly using in school, work and daily life. It’s important to consider the biases that result from widespread use of large language models. For example, these models could exhibit biases resulting from negative stereotyping involving gender, race or religion, as well as biases in representation of minorities and disabled people. As these models demonstrate the ability to outperform humans on tests such as the bar exam, I believe that they require greater scrutiny to ensure that AI-augmented work conforms to standards of transparency, accuracy and source crediting, and that stakeholders have the authority to enforce such standards. Ultimately, who wins and loses from large-scale deployment of AI may not be about rogue superintelligence, but about understanding who is vulnerable when algorithmic decision-making is ubiquitous. Anjana Susarla, Professor of Information Systems, Michigan State University. This article is republished from The Conversation under a Creative Commons license. Read the original article. GovTech, 6d ago
C2RO is a member of Lenovo’s AI Innovators Program, which is a partnership of AI software companies leveraging Lenovo’s hardware and services to help deliver innovative solutions for customers. The partnership has enabled C2RO to do the heavy lifting of video-to-data analytics by providing ready-to-deploy infrastructure based on Lenovo’s ThinkSystem SE350, SE450 or SR630 servers, which are tailor-made for AI-centric applications. AI Innovators makes these available on a sliding-scale payment process, Forlini said. “In essence, this works as a pay-as-you-go (process), where this hardware might be worth X dollars, but if you’re really only using 20 percent of it, you’re only paying for 20 percent of it until you scale up the usage of the box itself,” Forlini said. “Our Lenovo partners empower every customer to seamlessly scale into the full range of services they require.” The interaction with Lenovo AI Innovators seems to deliver benefits to both C2RO and its customers. “Coming from a rich hardware background, I hold a deep admiration for the intricacies of system design. It’s not just the sleek hardware that captures my attention, but the unyielding dedication to reliability, accessibility, and serviceability. When you pair top-tier hardware with stellar after-sales support, you’ve got a winning combination,” explained Forlini. “This is pivotal because when your clients rely on their data day in and day out to power their business engines, the last thing you want is hardware-induced hiccups. Our clients are devoted to their data, and it’s our duty to ensure it’s at their beck and call 24/7. Lenovo empowers me to deliver that kind of solution to our valued clients.” Lenovo servers just work, Forlini added. “But now Lenovo works with us to really truly ensure that when this hardware gets deployed at the customer site, the customer is not going to see any integration issues between our AI and the hardware.” AI Innovators takes this a step further, Forlini said: “We take these great hardware platforms, then we take our AI solution we validated on preconfigured, predefined configurations of hardware to ease the global deployment of this technology and make it easier for customers to scale across thousands and tens of thousands of locations.” The Next Platform - In-depth coverage of high-end computing at large enterprises, supercomputing centers, hyperscale data centers, and public clouds., 26d ago
This panel is a window into the efforts of industry leaders to leverage existing governance procedures in privacy and other domains as they tackle artificial intelligence governance challenges. Panelists shared examples and lessons learned from engaging with product teams and the C-suite to ensure AI systems meet privacy requirements, while also addressing the additional equities and risks at issue from the training and deployment of AI. Ohio State University Moritz College of Law professor Dennis Hirsch shared his knowledge of governance and accountability by highlighting the diverging paths companies are taking to prepare for a regulated AI future. What are the biggest risks we face? In comparing and contrasting these expert perspectives, other professionals can better understand how to adapt their own internal structures to the demands of new risks. When responsibility is shared, how do we make sure we get it right? Panel: Moderator Dennis Hirsch, Rohit Chauhan, Patrice Ettinger, Pagona Tsormpatzoudi, Jane Von Kirchbach. iapp.org, 20d ago
To that end, Schweizer recently held an Artificial Intelligence Summit at the Aurora Public Library, designed to both inform people about AI and get a conversation going about its practical application in Aurora. The former Aurora Preservation Commission member is active with several local non-profits, and is currently working on AI visualization and interactive tools for his company. Global Data Sciences Inc. provides scientific, data-focused approaches to developing and executing strategies for companies. As part of the event, he called on the city of Aurora to develop an AI action plan, similar to what New York City unveiled in October. As far as Schweizer and anyone else could tell, New York is the only city to have such a plan so far, although it's impossible to say how many might be in the works. "The city of Aurora needs an action plan, and citizens need to have input," Schweizer said. "If you don't know where you're going, how are you going to get there?" The New York plan, produced by the city's Office of Technology and Innovation, involved some 50 city employees from 18 agencies, as well as the insights of industry, academia and civil society, according to the plan itself. The plan calls for seven broad-based initiatives, including calling for a governance framework, such as an AI Steering Committee; establishing principles and guidelines; pursuing monitoring of AI tools; having an external advisory network; sharing information to other governments; fostering public engagement; building AI knowledge and skills in city government; and reporting yearly on the city's progress. Broadly defined, AI describes a wide variety of technologies that use data to make predictions, inferences, recommendations, rankings or other decisions, according to the New York plan. "While AI technologies have recently captured the public imagination by producing images and text on command, the reality is that they have existed for decades in diverse forms and uses," the plan adds. Such technologies include tools that filter spam from email, support medical care and optimize the use of energy in homes and workplaces. "We tend to think of AI as a thing, but it's not," Schweizer said. "It's just a computational way of writing things out. There are a lot of different kinds." Schweizer noted that many of the abuses or pitfalls of AI are real, which is why validation and verification are necessary with any AI product. AI tools can be helpful in organizing and summarizing the vast number of data sets currently being used and accumulated daily. They could help a city, for instance, fill Freedom of Information requests that are too unwieldy to handle right now. "AI is a hammer," Schweizer said. "I can bring that hammer and smash a window. Or I can build something with it." Schweizer said he hopes to hold another AI summit in January. © 2023 The Beacon-News (Aurora, Ill.). Distributed by Tribune Content Agency, LLC. GovTech, 20d ago
The company has pioneered a comprehensive AI marketing assistant, designed to optimize efficiency across five critical domains. These include automated advertising material generation, seamless data integration for insights, streamlined customer journey planning, enhanced capture of interactive sales opportunities, and fortified conversational marketing strategies. Appier is dedicated to translating AI into tangible returns on investment through the power of software intelligence. Over the past four years, the company's product portfolio and market presence have seen continuous expansion, driving remarkable operational growth. With revenue growth surpassing fourfold in just five years, Appier remains at the forefront of innovation, leveraging generative AI's creative prowess and decision-making AI's capabilities to fortify its competitive edge. Amidst rising inflation, consumers are increasingly cautious in their spending choices. To capture consumer attention during this inflationary era, leveraging AI to enhance the digital advertising and marketing experience emerges as the key to success. Notably, generative AI currently holds a modest 3% share of the global digital marketing market. According to MarketResearch.biz, the adoption of generative AI in digital marketing could potentially reach a global market value of $19.5 billion by 2032, boasting an impressive compound annual growth rate (CAGR) exceeding 29%. This underscores the significant growth potential of generative AI. Businesses that seize the opportunity to harness generative AI to boost productivity, coupled with the predictive power of AI to ensure a return on investment, can secure a competitive edge and bolster brand resilience. Appier also underscores the profound impact and emerging possibilities presented by generative AI. AI has ushered in a genuine computer era, propelling global enterprises beyond the realm of digital transformation and into an era powered by AI.
The ascendancy of generative AI has not only revolutionized white-collar work but has also redefined the approach taken by marketers in crafting advertisements, shaping marketing content, and engaging in conversational commerce. Faced with this paradigm shift, businesses can embark on this journey by prioritizing relevant tasks, forming cross-functional teams, conducting small-scale trials, identifying models tailored to their organizations, and efficiently harnessing external resources. Beginning in the fields of advertising and marketing, they can optimize operational efficiency and extend the use of AI for broader business decision-making. Dr. Chih-han Yu, CEO and Co-founder of Appier, shares this vision: "Appier consistently aligns with our clients, prioritizing a customer-centric ethos. Our team of domain experts spearheads innovative solutions, providing customers with the industry's most advanced AI technology, fortified by our proprietary technology and expanded services. We facilitate real-time data visualization, significantly reducing the time from data acquisition to actionable insights. This empowers our clients to develop cutting-edge AI models using their exclusive data, ultimately enhancing their brand's sustained competitiveness. Through the utilization of generative AI technology, we continuously elevate advertising and marketing outcomes, seamlessly integrating decision-making AI with domain expertise, effectively realizing our mission of 'Making AI easy by making software intelligent.'" Amid the ongoing economic volatility, Appier is actively creating a comprehensive AI marketing assistant for businesses. The primary goal is to enhance efficiency across five critical objectives:

1. Automatically generating advertising materials. Digital advertising is a multifaceted domain that demands significant attention from marketers. It involves monitoring material performance, refreshing content, and creating seasonal visuals. Appier's marketing material generation instructions simplify this process using basic product images and audience profiles. Generative AI swiftly produces multiple sets of advertising materials that align with the audience and season. For instance, in the context of a food delivery app, Appier automates the creation of tailored advertising materials by considering business or product specifics, weather conditions, and audience interests. Real-time tracking enables the AI model to predict and generate similar materials, consistently enhancing advertising effectiveness.

2. Data integration for insights. Unifying fragmented data across diverse marketing channels and swiftly extracting valuable insights has long posed a challenge for businesses and marketers. Appier's AI customer data platform offers a seamless solution by effortlessly consolidating data from various origins, encompassing online and offline sales data, user interactions across web and app marketing channels, and external system data. Once integrated, the platform creates a comprehensive 360-degree user profile, delivering clear insights into the user's journey at every interaction point. This real-time analysis allows for proactive exploration of potential customer needs.

3. Automated customer journey planning. Appier's Co-pilot, driven by generative AI technology, takes the initiative to recommend customer journeys, streamlining work processes for marketers. To benefit from this feature, marketers simply need to outline their marketing needs, for example, "Assist in devising a new product promotion campaign, including step-by-step delivery of product details, features, and early bird offers." Generative AI can then promptly generate a user journey based on these guidelines, significantly reducing the time spent on the manual setup of each marketing stage. This in turn enables marketers to concentrate on the broader aspects of brand strategy planning.

4. Capturing interactive sales opportunities. Each moment a customer engages with a website holds significant sales potential. Appier's AI models swiftly identify a customer's purchase intent and sensitivity to pricing, drawing insights from their collected user behavior data. Subsequently, it provides personalized product recommendations to streamline the purchasing process. For instance, when the system detects a price-sensitive customer, it may offer discounts to encourage them to finalize the purchase. On the other hand, for less price-sensitive customers, it can proactively suggest premium products or product bundles, thereby enhancing the average order value.

5. Strengthening interactive conversational marketing. Appier's conversational marketing platform operates in real-time, employing company-provided document data, including product specifics and FAQs, to educate AI models. This training results in contextually relevant dialogue content, ensuring more precise responses and elevating customer satisfaction. Furthermore, Appier's AI Click Optimization feature is tailored to segment-specific communication. It leverages analysis of past push notification records, user behavior, and preferences to target messages toward users with a higher likelihood of engagement. This not only saves businesses costs but also prevents the delivery of irrelevant messages to disinterested users.

In the face of unpredictable economic conditions, Appier's harmonious blend of generative AI's creativity and decision-making AI's prowess plays a pivotal role in enabling brands to harness sales opportunities in various marketing scenarios. This, in turn, elevates the brand's distinctiveness, which holds particular significance in an uncertain economic climate. Hashtag: #Appier #GenerativeAI #marketing #artificialintelligence #martech https://www.appier.com/en/ https://www.linkedin.com/company/appier/...SME Business Daily Media, 13d ago


Also, many aspects of AI will require intermediaries between the technology and end users, be they businesses or consumers, to facilitate its widespread adoption. It will be necessary to develop solutions to the bottlenecks related to its expansion, such as infrastructure, sustainable management of large data flows, and availability of advanced hardware resources. We will need to seek energy-efficient solutions, ensuring that AI works in harmony with the planet rather than against it.Zephyrnet, 4d ago
The risks associated with generative AI have been well-publicized. Toxicity, bias, escaped PII, and hallucinations negatively impact an organization’s reputation and damage customer trust. Research shows that not only do risks for bias and toxicity transfer from pre-trained foundation models (FM) to task-specific generative AI services, but that tuning an FM for specific tasks, on incremental datasets, introduces new and possibly greater risks. Detecting and managing these risks, as prescribed by evolving guidelines and regulations, such as ISO 42001 and the EU AI Act, is challenging. Customers have to leave their development environment to use academic tools and benchmarking sites, which require highly specialized knowledge. The sheer number of metrics makes it hard to filter down to the ones that are truly relevant for their use cases. This tedious process is repeated frequently as new models are released and existing ones are fine-tuned. CoinGenius, 4d ago
AI News caught up with Victor Jakubiuk, Head of AI at Ampere Computing, a semiconductor company offering Cloud Native Processors. We discussed how they are driving high-performance, scalable and energy-efficient solutions built for the sustainable cloud. In today’s business landscape, artificial intelligence (AI) has been an undeniable game-changer, driving innovation and competitiveness across all industries. Yet, a critical hurdle has recently emerged as companies rapidly shift to cloud-native processes: a severe shortage of servers and rising operational costs. With soaring demand for computational power risking the seamless integration of AI-driven initiatives, businesses now face the urgent task of finding innovative, affordable and far more sustainable solutions to tackle this shortage – a shortage only set to continue. The impact on the environment is also a concern. A new study reveals that by 2027, the AI industry could consume as much energy as a country like Argentina, the Netherlands, or Sweden. With energy-intensive graphics processing units (GPUs) a popular choice for AI workloads, this computing power has seen energy consumption and carbon footprints hit unprecedented highs. As businesses scale their digital footprints, the imperative for sustainability becomes increasingly important. The environmental impact of energy-intensive hardware poses a moral and practical challenge, demanding solutions that reconcile performance with responsible resource usage. “Efficiency is key in the future of computing,” explains Jakubiuk. “While the universe of compute and data is expanding exponentially, the focus on energy efficiency in individual workloads is notably increasing.” “Historically, GPUs were the go-to for AI model training due to their compute power. However, they are power-hungry and inefficient for production,” he says. “Deploying GPUs for inference transfers these inefficiencies, compounding power, cost, and operational complexities.” This is all the more notable as processes scale, he warns. While AI training requires large amounts of compute up front, AI inferencing can require up to 10x more total compute over time, creating an even larger problem as AI usage scales. Ampere Computing has set out to deliver solutions that meet these needs. “We focus solely on efficient AI inference on less energy-hungry central processing units (CPUs), delivering unparalleled efficiency and cost-effectiveness in the cloud,” Jakubiuk says. “Our software and hardware solutions offer a seamless transition without necessitating an overhaul of existing frameworks, setting us apart from closed-source alternatives.”...AI News, 4d ago
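Jakubiuk’s claim that inference compute eventually dwarfs the one-off training cost is straightforward arithmetic; a back-of-the-envelope sketch, with every FLOP figure hypothetical rather than taken from Ampere or any real model:

```python
def inferences_to_match_training(train_flops: float, flops_per_inference: float) -> float:
    """Request count at which cumulative inference compute equals the
    one-off training budget. All figures below are illustrative."""
    return train_flops / flops_per_inference

# Hypothetical model: 1e23 FLOPs to train, 1e12 FLOPs per served request.
breakeven = inferences_to_match_training(1e23, 1e12)
print(f"{breakeven:.0e} requests")  # 1e+11 requests

# At a hypothetical 1e8 requests/day, cumulative inference compute passes
# the training budget after roughly breakeven / 1e8 ≈ 1000 days, and keeps
# growing linearly from there.
```

Because the crossover point scales linearly with request volume, per-inference efficiency (not training efficiency) dominates lifetime energy and cost at deployment scale, which is the argument for efficiency-focused inference hardware.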


The chatbot celebrates its first birthday today, marking a year in which the company both developed the system and made rapid improvements to it. Considering the transformations that ChatGPT has undergone in just a year, it is clear that companies like OpenAI will only continue to push the boundaries of AI that can be used by the everyman. aimagazine.com, 4d ago
The emergence of AI-generated art has sparked discussions about the future of art itself. Some see AI as a tool that can augment and enhance human creativity, while others believe that it could eventually lead to the creation of art that surpasses human capabilities. Regardless of its ultimate impact, AI is undoubtedly transforming the art world, pushing the limits of what is possible and challenging our traditional notions of art and creativity.Dataconomy, 4d ago
As the insurance industry continues its journey into an AI future, there is much to consider along the way, including ensuring that AI use is fair and ethical: “Insurers should have ethical AI principles, a supporting assurance framework and ethical assessment processes,” emphasises Nicholson: “It is also important to have an ethics committee with a diverse and broad range of contributors ensuring that adequate challenge is provided to any new developments.”...Insurance Post, 4d ago
...“It is critical that a business is able to store, access and work with high-quality data to work with generative AI,” said Dr Swami. “Although creating and operating a solid data foundation is not a new idea, it does need to evolve today and embrace a comprehensive, integrated set of data services that can account for the scale and type of use cases that will be dealt with here – and AWS has the broadest set of database services that can deliver every type of service at the point, all with the best price performance. Organizations also need tools to work with data and to be able to catalogue and govern data too – and across all these areas AWS provides the tools to execute these functions and tasks.”...computerweekly.com, 4d ago
In a rapidly evolving business technology landscape, artificial intelligence (AI) has emerged as a transformative force in management. The predictive capabilities of AI have equipped managers with data-driven foresight, enabling them to monitor and anticipate market trends, business risks, customer preferences, and employee behaviors, thereby facilitating more evidence-based decisions. However, as we explore the future of management, we recognize that the potential of AI extends beyond prediction. The emerging generative capabilities of AI represent a leap forward, fostering creativity and enabling innovative ideas, designs, and solutions. With its user-friendly interface, generative AI makes it easier for a broader swath of the population to get involved in AI-enabled problem solving. The synergies between the predictive and generative capabilities of AI are undeniable. Predictive insights fuel generative processes, while generative outputs enhance predictive accuracy. This powerful extension of AI, from prediction machines to generative problem-solvers, presents the potential for AI to transform a host of conventional management practices, heralding an era where artificial agents complement and potentially replace managers and knowledge workers in a variety of organizational settings. These developments have the potential to fundamentally alter the nature of the firm, the future of work, and management theories.AOM_CMS, 4d ago
Innovations using AI in the field of clinical trials are here to stay. But, with great power comes great responsibilities. Hence, there is a need for robust safeguards to protect the security of personal data. While the mission to have more efficient clinical trials and more inclusive participation is laudable, the advocates, the author included, are fully aware of the risks of unregulated use of AI and are working hard to ensure AI innovation follows regulations and complements human expertise. In short, we need to ensure the responsible and ethical use of AI, irrespective of whether regulation is in place or still in progress.Fast Company, 4d ago


Perhaps most importantly, leaders and educators need to resist the temptation to become overly focused on—or even panicked about—how AI might change teaching and learning. The dawn of ubiquitous AI should serve as a reminder that children still need to develop a deep foundation of knowledge to use these tools well, and that the best use of AI in traditional schools is to free up the time of educators to do more work directly with students. Outside of schools, AI can help cultivate the “weirder” ecosystem of educational options needed for a system of education that empowers families to access the educational opportunities their children need to thrive. When used thoughtfully, AI tools have the potential to move us closer to an education system that provides a more diverse range of experiences to meet the unique needs of every student.The Thomas B. Fordham Institute, 4d ago
Newswise — In a time when the Internet has become the main source of information for many people, the credibility of online content and its sources has reached a critical tipping point. This concern is intensified by the proliferation of generative artificial intelligence (AI) applications such as ChatGPT and Google Bard. Unlike traditional platforms such as Wikipedia, which are based on human-generated and curated content, these AI-driven systems generate content autonomously - often with errors. A recently published study, jointly conducted by researchers from the Mainz University of Applied Sciences and Johannes Gutenberg University Mainz (JGU), is dedicated to the question of how users perceive the credibility of human-generated and AI-generated content in different user interfaces. More than 600 English-speaking participants took part in the study. As Professor Martin Huschens, Professor for Information Systems at the Mainz University of Applied Sciences and one of the authors of the study, emphasized: "Our study revealed some really surprising findings. It showed that participants in our study rated AI-generated and human-generated content as similarly credible, regardless of the user interface." And he added: "What is even more fascinating is that participants rated AI-generated content as having higher clarity and appeal, although there were no significant differences in terms of perceived message authority and trustworthiness – even though AI-generated content still has a high risk of error, misunderstanding, and hallucinatory behavior." The study sheds light on the current state of perception and use of AI-generated content and the associated risks. In the digital age, where information is readily available, users need to apply discernment and critical thinking. The balance between the convenience of AI-driven applications and responsible information use is crucial. As AI-generated content becomes more widespread, users must remain aware of the limitations and inherent biases in these systems. Professor Franz Rothlauf, Professor of Information Systems at Johannes Gutenberg University Mainz, added: "The study results show that – in the age of ChatGPT – we are no longer able to distinguish between human and machine language and text production. However, since AI does not 'know', but relies on statistical guessing, we will need mandatory labeling of machine-generated knowledge in the future. Otherwise, truth and fiction will blur and people cannot tell the difference." It remains a task of science communication and, not least, a social and political challenge to sensitize users to the responsible use of AI-generated content. newswise.com, 4d ago
When AI starts by building extremely general models and then attempting to apply them to specific educational situations, risks abound. Thus, a second aspect of my proposal suggests that our efforts towards powerful, safe AI should begin with well-bounded problems. One that seems well suited to today’s AI is determining how to provide optimal supports for learners with disabilities to progress in mathematics problem solving. Although I believe parents are not willing to share their students’ data in general, I can imagine a collective of parents becoming highly motivated to share data if it might help their specific neurodiverse student thrive in mathematics. Further, only limited personal data might be needed to make progress on such a problem. Thus a second element of my proposal is (2) energize nonprofits that work with parents on specific issues to determine how to achieve buy-in to bounded, purpose-specific data sharing. This could involve a planning grant stage, which if successful, would result in money needed to establish a local privacy-protected method of sharing data.The Thomas B. Fordham Institute, 4d ago
Perhaps because of this, there is a growing focus on building trust in media, in government, and in AI systems. When it comes to data-centric technologies, this raises important questions, including: Can trust be built into systems that users have determined to be untrustworthy? Should we be thinking of trust as something that is declining or improving, something to be built into AI and other data-centric systems, or as something that is produced through a set of relations and in particular locations? Where else, besides large institutions and their technologies, is trust located? How do other frames of trust produce community-centered politics such as politics of refusal or data sovereignty? What can community-based expertise tell us about how trust is built, negotiated, and transformed within and to the side of large-scale systems? Is there a disconnect between the solutions to a broad lack of trust and how social theorists, community members, and cultural critics have thought about trust?...Data & Society, 4d ago
Newswise — A new AI-assisted molecular diagnostic platform capable of identifying variants of COVID-19 and other infectious diseases has been developed by scientists in the UK. The low cost, portable device could play a crucial role in preventing future pandemics due to its accuracy and versatility. Scientists from the University of Surrey, Brunel University London, and Lancaster University, in collaboration with the NHS, GB Electronics (UK) Ltd and Vidiia Ltd, have created the platform known as VIDIIA Hunter (VH6). It uses reverse transcription loop-mediated isothermal amplification (RT-LAMP) technology in combination with an artificial intelligence (AI)-based deep learning model. The AI model has been trained to read the results of tests identifying infectious diseases, including COVID-19, and removes users’ interpretation and errors whilst improving accuracy. Professor Roberto La Ragione, Professor of Veterinary Microbiology and Pathology at the University of Surrey, said: “Lateral flow tests are an efficient way of testing if you have COVID-19; however, there has always been a question mark over their accuracy, which has only been heightened with the emerging number of variants now in circulation. As COVID-19 continues to evolve, we need to evolve with it and have highly accurate tests that can be readily used without the need for laboratory facilities.” To confirm the accuracy of VH6, scientists tested 150 COVID-19-positive clinical nasal swabs with a range of viral loads and 250 negative samples provided by NHS Berkshire, Surrey Pathology and Royal Lancaster Infirmary, Lancaster. The test was found to be highly accurate, with a detection rate of 98 percent and a specificity of 100 percent. Additional testing found the device detected all the COVID-19 variants that have circulated in the UK since December 2020. Dr Aurore Poirier, first and co-corresponding author of the study and Research Fellow B at the University of Surrey, said: "The VH6 diagnostics platform has been approved for COVID-19 testing in the UK, but also has the potential to diagnose current and emerging infectious diseases and antimicrobial resistance. Its portability, rapidity, accuracy, and affordability allow for near-patient testing in all laboratory and healthcare settings, including low-resource ones. The VIDIIA Hunter therefore has the potential to help control future outbreaks." To monitor and track the spread of COVID-19 and other infectious diseases, the test is connected to a smartphone app that allows an operator to manage and track the patients and samples. Results and graphs are displayed on the app in as little as 20-30 minutes and automatically upload to a cloud. The platform allows near-patient testing and has the potential to detect other infectious diseases such as tuberculosis and dengue fever, as well as antimicrobial resistance. Unusually, the test can be used for both human and animal healthcare, which is a crucial step in identifying any future zoonotic diseases which could spread between the two. Professor Muhammad Munir, Professor of Virology and Viral Zoonosis at Lancaster University, said: “Incorporation of LAMP technology with advanced modules of AI has empowered the earliest, reliable and economical detection of infections, including COVID-19, and holds potential for the detection of diseases in both humans and animals, making it a tool of significant medical importance.” The VH6 has now been approved for medical use in the United Kingdom under the UK Health Security Agency’s Medical Devices (Coronavirus Test Device Approvals, CTDA) Regulations 2022 and is CE-IVD marked and MHRA registered. A study using this diagnostic platform has been published in Frontiers in Molecular Biosciences. newswise.com, 4d ago
A critical aspect of the conference is navigating the regulatory landscape associated with Gen-AI adoption. Participants will engage in discussions addressing regulatory, ethical, and legal concerns, ensuring that businesses are well-positioned to thrive amidst evolving regulatory frameworks. Additionally, the conference will delve into the enhancement of data reliability and governance within the insurance sector. By exploring methods such as creating synthetic data and digital twins, the conference seeks to fortify the foundation for a successful integration of Gen-AI.pressat.co.uk, 4d ago

Latest

In response to the introduction of this new type of technology in healthcare, the CAR has set up a Radiology AI Validation Network (RAIVN). This assembly consists of AI specialists in the field of radiology tasked with assisting with post-market assessment of AI applications. As a resource, RAIVN would serve as the national body responsible for evaluating the performance of these technologies and pre-identifying any potential issues that may affect patient care. While this program is still in its infancy, we are hopeful its integration will be smoothly executed in the months ahead. We also believe that the RAIVN framework can and should be applied more generally to all AI-based solutions in healthcare.Hospital News, 4d ago
...“While there’s been significant concern about the abuse of AI and LLMs by cybercriminals since the release of ChatGPT, our research has found that, so far, threat actors are more sceptical than enthused. Across two of the four forums on the dark web we examined, we only found 100 posts on AI. Compare that to cryptocurrency, where we found 1,000 posts for the same period. We did see some cybercriminals attempting to create malware or attack tools using LLMs, but the results were rudimentary and often met with scepticism from other users. In one case, a threat actor, eager to showcase the potential of ChatGPT, inadvertently revealed significant information about his real identity. We even found numerous ‘thought pieces’ about the potential negative effects of AI on society and the ethical implications of its use. In other words, at least for now, it seems that cybercriminals are having the same debates about LLMs as the rest of us”, said Christopher Budd, director, X-Ops research, Sophos.TahawulTech.com, 4d ago
The article concludes by underscoring that while societal value may be generated in the AI sector, it doesn’t guarantee that individual companies will capture that value. Wang highlights that some AI-related value may benefit existing industry incumbents, creating a scenario where societal gains surpass individual company profits. The article suggests that only a select group of AI startups—those effectively generating and capturing value, potentially displacing incumbents—will experience substantial returns and gain recognition in the tech landscape. He expresses concern that, in the current investment climate, indiscriminate funding of AI startups may lead to wasted resources.TechStartups - Startups and Technology news, 4d ago

Top

The impact of artificial intelligence (AI) on people and talent acquisition (TA) is substantial. It can revolutionize how organizations recruit, manage, and develop their teams. Yet, like any transformative technology, it's crucial to ensure its design and use uphold the principles of fairness, transparency, and privacy. Failure to adhere to these principles can lead to legal penalties and damage to brand reputation. Most importantly, it can adversely affect the very people we aim to uplift with these innovations and rapidly accelerate the unfair practices we’ve worked so hard to amend. For HR and TA leaders, this presents an important responsibility — and opportunity: to guide the ethical implementation of AI within our organizations. Ignorance is not bliss; a lack of understanding about AI's ethical dimensions poses a risk that could compromise our ability to make informed and ethical decisions. Therefore, staying informed about ethical AI is not just a matter of staying current; it's a strategic imperative in the rapidly evolving landscape. In this exclusive virtual event, you will learn about such topics as:...hr.com, 12d ago
Introducing Gooey.AI, the revolutionary platform that empowers developers to unlock the full potential of artificial intelligence in an efficient and seamless manner. With Gooey.AI, developers can effortlessly discover, customize, and deploy low-code AI recipes, utilizing the finest combination of private and open-source Generative AI. Crafted specifically for skillful developers who thrive on speed and teams that demand tangible returns on investment, Gooey.AI is here to elevate your coding game to new heights. Immerse yourself in a world where innovation meets simplicity. Gooey.AI offers an unparalleled range of pre-designed AI recipes, meticulously curated to cater to your unique project requirements. These low-code solutions enable you to dive straight into your coding endeavors, spending less time on arduous and mundane tasks, and more time on what truly matters: groundbreaking development. Customization lies at the very heart of Gooey.AI. We understand that no two projects are alike, and that's why we offer a robust suite of tools and features that allow you to mold the AI recipes to your desired specifications. Tailor every aspect of your AI model with precision, ensuring it aligns seamlessly with your project's vision and goals. Break free from the constraints of one-size-fits-all solutions and unlock the true potential of your creations. At Gooey.AI, we recognize the importance of collaboration within a team. The platform enables you to effortlessly synchronize with your fellow developers, empowering you to collectively innovate and achieve great results. Work together in perfect harmony, leveraging the power of AI to amplify your productivity and drive success forward. But what truly sets Gooey.AI apart is its unparalleled focus on delivering measurable returns on investment. We understand that investing in AI technologies is not simply a matter of novelty, but rather a strategic move to enhance your business outcomes. 
That's why every feature of Gooey.AI is meticulously designed to prove ROI, providing you with the confidence and assurance that your efforts are generating real-world value. Join the ranks of industry-leading professionals who have already embraced Gooey.AI to revolutionize their development workflow. With our platform, you will witness firsthand the seamless integration of artificial intelligence into your projects, empowering you to deliver unparalleled results with ease and efficiency. So, why wait? Unleash the power of Gooey.AI and embark on a journey of extraordinary innovation. Let Gooey.AI redefine what is possible in the world of AI coding, empowering you to traverse uncharted territories of creativity and productivity. Get ready to witness your projects come to life like never before. This coding revolution starts here, with Gooey.AI.saasworthy.com, 13d ago
The Executive Order issued by President Biden represents a significant shift in the way AI is regulated in America. For security teams at companies using AI, it presents a range of new challenges and opportunities. AI unlocks tremendous innovation, and it also requires security teams to adapt their systems and processes so they can secure the AI pipelines and protect against AI misconfigurations and vulnerabilities. By understanding the implications of these directives, security teams can ensure that their use of AI is not only secure but also ethical and compliant with the new standards.wiz.io, 12d ago
C: What are you most passionate about when it comes to data and analytics? What do you think is too often overlooked or misunderstood? ER: Unfortunately, the hype created around data has become our Kryptonite! Often, we are not able to deliver on the promises we make, with some recent studies indicating that close to 80% of data projects fail. I am still passionate about the potential data has to transform a business. I always tell my team that we need to work ourselves out of a job – that the use of data should become second nature to the business users, and it should be their first port of call when making decisions. Therein lies another fallacy about data-driven decision-making – that it completely discounts experience and intuition. This is a common misunderstanding. I am a big proponent of informed intuition, where data is used to either confirm or challenge your intuition. This approach always leads to better decision-making. I am also very passionate about driving data literacy with non-technical employees. Utilizing data as a tool within your tactical and strategic arsenal should not only be limited to those who have a knack for writing code. Cultivating a better understanding of how data can be used to make your work life easier is something I place a lot of emphasis on. C: What do you think are some of the biggest challenges facing data and analytics leaders today? And how do you think they can be overcome? ER: For the last couple of years, all organizations have been competing for a limited supply of data talent. As data and analytics leaders, we must change the way we recruit new talent into our organizations. We will have to be a lot more flexible in talent management, which will hinge on a strategy-driven, differentiated approach. This, at the very least, means that new talent should be recruited with the end in mind. 
The focus should be on what the business requirements are, and only then matching that with the human and technical skills needed to fulfill those requirements. Added to that, strategies such as outsourcing, offshoring, and retraining talent will become more important. Tech talent is also more likely to join organizations based on the work they will be doing. A proper career path will become critical, allowing talent to build depth in multiple areas throughout their career. Finally, as AI continues to automate problem-solving, our focus should shift to talent that can guide AI technologies toward business results. C: In your experience, what does it take to be a successful leader in the data and analytics space? What characteristics or skills should aspiring data leaders focus on cultivating? ER: There is no simple answer to this. In my view, a successful leader in the data and analytics space is someone who can harmonize a combination of technical expertise, strategic thinking, business acumen, and very strong interpersonal skills. In particular, aspiring data leaders should not only develop technical proficiency but also business understanding, communication skills, an ethical mindset, problem-solving, and analytical thinking. It also requires the ability to exhibit resilience and perseverance to achieve the functional goals set by the organization.coriniumintelligence.com, 13d ago
Noting similar benefits of custom AI tools in terms of data management, the University of Michigan also recently developed its own suite of generative AI tools with similar functions: U-M GPT, U-M Maizey and U-M GPT Toolkit. According to the university’s website, the most accessible level is U-M GPT, a free, university-hosted large language model that allows users to do chat-based queries, while U-M Maizey can analyze data sets input by students and faculty, and connect them to Google, Dropbox and Canvas. The website added that the U-M GPT Toolkit — the university’s most advanced option — gives users complete control over AI environments via access to an API gateway, which could be particularly useful for researchers and faculty with more technical knowledge of how AI and LLMs work.Echoing Kellen, University of Michigan’s Executive Director of Emerging Technology and Support Services Bob Jones agreed that on-prem AI tools like theirs allow universities to better secure data and protect privacy, rather than leaving those concerns largely up to third parties like OpenAI, the developer of ChatGPT. Noting concerns about how the use of tools like ChatGPT among students could widen the digital divide, he said the development of UM’s in-house AI also allowed the university to make AI tools more accessible to students, both in terms of cost and how they function.“We wanted to remove barriers [to using AI] for our community and beyond, and we focused on key capabilities that we thought were important,” he said. “Our data is private to the community. We do not pass any information along to the actual large language model for training or a change in the algorithm, which is sort of a key difference between us and OpenAI. Our environment is also accessible via screen readers, and we work with our accessibility team to ensure that it’s available to as many people as we can possibly make it.”...GovTech, 17d ago
Mike has always had a knack for working with complex machines and systems. He grew up on a farm where he’d take apart equipment and put it back together. “Understanding how things worked seemed important somehow,” he tells me. Throughout his professional career, it’s been more important than ever. At Microsoft, he worked on a breadth of projects, including leadership roles for their Flight Simulator, Streetside imagery operations and the Bing Maps Data Platform. “I spent about 15 years working on maps and geospatial problems in one way or another. I discovered I really enjoyed the challenge of making maps,” he says. The map leads to TomTom: “I decided if I want to make maps, I needed to join the world’s best mapmaking company,” he says. Mike is now the Director of Product Management for three separate yet connected areas of TomTom Maps, all contributing to TomTom mapping the world in real time. It’s going well so far. Mike says, “What’s impressed me has been TomTom’ers’ willingness to adopt new approaches and throw their effort behind a single vision of building the real-time map. We’ve demonstrated we can take on and accomplish big, ambitious goals by getting TomTom Orbis Maps to General Availability status. It’s exciting to think about the next set of big challenges we will take on.” The future of product management: Within his role, Mike has his own exciting vision. “In the future, product managers will need to be more adept at dealing with data, metrics and generating insights to answer those questions. AI will play a greater role in this,” he says. “PMs will need to become proficient at shaping the AI’s responses, something called ‘prompt engineering,’ to ensure effective, unbiased results. 
This is particularly exciting because of how it will enable PMs to harness the breadth and depth of knowledge on the web via powerful large language models in ways not possible before in human history.”Being a mapmaker allows Mike to combine complex problems and innovative tech with real-world solutions that make an impact. As he says:...TomTom, 12d ago

Latest

Moshe mentioned the use of AI from a TSMC perspective and said there are two areas that have been publicly discussed. The first is the creation of standard cell libraries where the process has become so complex that it’s not possible for humans to understand all the design rules. A few examples are fed in and then the entire cell library can be built through the tools. It’s very complex with a lot of corner cases, and the tool generates higher quality and density libraries. The second is the use of moving analog IP to newer technology nodes. Moshe said it fits Ivo’s analogy of using an airplane in that it gets you closer to your goal, but still needs some human intervention for cleanup to reach the final product. Moshe senses that we are approaching a threshold and that once we cross it, it will be like a dam bursting with new capabilities.Semiconductor Engineering, 4d ago
In the last few years, Large Language Models (LLMs) have risen to prominence as outstanding tools capable of understanding, generating and manipulating text with unprecedented proficiency. Their potential applications span from conversational agents to content generation and information retrieval, holding the promise of revolutionizing all industries. However, harnessing this potential while ensuring the responsible and effective use of these models hinges on the critical process of LLM evaluation. An evaluation is a task used to measure the quality and responsibility of the output of an LLM or generative AI service. Evaluating LLMs is motivated not only by the desire to understand a model's performance but also by the need to implement responsible AI, to mitigate the risk of providing misinformation or biased content, and to minimize the generation of harmful, unsafe, malicious and unethical content. Furthermore, evaluating LLMs can also help mitigate security risks, particularly in the context of prompt data tampering. For LLM-based applications, it is crucial to identify vulnerabilities and implement safeguards that protect against potential breaches and unauthorized manipulations of data.CoinGenius, 4d ago
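The evaluation task described above can be sketched in a few lines. This is a minimal, illustrative harness only: `fake_model` is a stand-in for a real LLM call, and the two metrics (exact-match accuracy and a toy denylist safety check) are placeholder examples of the "quality" and "responsibility" measurements the snippet refers to, not any specific service's API.

```python
# Minimal sketch of an LLM evaluation harness (illustrative assumptions only).
UNSAFE_MARKERS = {"malware", "exploit"}  # toy denylist for the safety check

def fake_model(prompt: str) -> str:
    # Placeholder for a real model call; answers a couple of fixed questions.
    canned = {
        "What is the capital of France?": "Paris",
        "What is 2 + 2?": "4",
    }
    return canned.get(prompt, "I don't know")

def evaluate(model, dataset):
    """Score a model on accuracy (exact match) and safety (denylist hits)."""
    correct = 0
    unsafe = 0
    for prompt, reference in dataset:
        output = model(prompt)
        if output.strip().lower() == reference.strip().lower():
            correct += 1
        if any(marker in output.lower() for marker in UNSAFE_MARKERS):
            unsafe += 1
    n = len(dataset)
    return {"accuracy": correct / n, "unsafe_rate": unsafe / n}

dataset = [
    ("What is the capital of France?", "Paris"),
    ("What is 2 + 2?", "4"),
    ("What is the capital of Peru?", "Lima"),
]
scores = evaluate(fake_model, dataset)
print(scores)
```

Real evaluation suites swap in richer metrics (toxicity classifiers, factuality checks, adversarial prompt sets), but the loop structure stays the same: run the model over a dataset and aggregate per-dimension scores.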
...“The bench and bar have created and enforced a comprehensive system of ethical rules and regulation. In many respects, it is a unique and laudable system for regulating and guiding lawyers, and it has taken incremental measures to account for the wave of new technology involved in the practice of law. But it is not ready for the future. It rests on an assumption that humans will practice law. Although humans might tinker at the margins, review work product, or serve some other useful purposes, they likely will not be the ones doing most of the legal work in the future. Instead, AI counsel will be serving the public. For the system of ethical regulation to serve its core functions in the future, it needs to incorporate and regulate AI counsel. This will necessitate, among other things, bringing on new disciplines in the drafting of ethical guidelines and in the disciplinary process, along with a careful review and update of the ethical rules as applied to AI practicing law”...bespacific.com, 4d ago
In the digital landscape, the imperative to balance automation with a human touch is paramount for fostering authentic connections. This section delves into the significance of humanizing digital interactions, emphasizing the role of AI-powered chatbots equipped with advanced natural language processing (NLP). These intelligent chatbots transcend mere automation, simulating human-like conversations and responses. By striking a delicate equilibrium between efficiency and genuine connection, businesses can elevate customer experiences. The nuanced approach outlined in this section explores how leveraging AI technologies enhances engagement, ensuring that customers feel heard and understood, even in the realm of digital interactions.customerthink.com, 4d ago
Generative AI models are great at emulating a variety of personas and sticking to them. With the application of proper prompting techniques, the focus or behavior of the model can be directed to take on a particular bias. From there, a model can evaluate a variety of risk scenarios by emulating multiple personas, providing insight from different perspectives. By using a number of perspectives, Generative AI can be leveraged to provide thorough risk assessments, and it is much more capable of being a neutral evaluator (via persona emulation) than a human would be. One can debate a model with an opposing persona and ensure that the scenarios being evaluated are thoroughly red-teamed.The Hacker News, 4d ago
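The persona-steering technique above amounts to prefixing the same scenario with different system instructions. A minimal sketch, in which the persona descriptions, the prompt template, and the scenario are all hypothetical placeholders (a real application would send each prompt to a generative AI service):

```python
# Sketch of persona-based prompting for multi-perspective risk assessment.
# Persona texts and the template below are invented for illustration.
PERSONAS = {
    "optimist": "You are a growth-focused executive who emphasizes upside.",
    "skeptic": "You are a cautious auditor who probes for hidden failure modes.",
    "regulator": "You are a compliance officer focused on legal exposure.",
}

def build_prompt(persona_key: str, scenario: str) -> str:
    """Compose a persona-steered prompt for the same risk scenario."""
    persona = PERSONAS[persona_key]
    return (
        f"{persona}\n"
        f"Assess the following risk scenario and list the top risks:\n"
        f"{scenario}"
    )

scenario = "Migrating customer data to a third-party cloud provider."
# One scenario, several personas: the model is queried once per perspective.
prompts = {key: build_prompt(key, scenario) for key in PERSONAS}

for key, prompt in prompts.items():
    print(f"--- {key} ---")
    print(prompt)
```

Pairing opposing personas (e.g., optimist vs. skeptic) and feeding one persona's answer back to the other is the debate-style red-teaming the snippet describes.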
Measuring purchasing power eliminates these distortions, which is why nobody measures purchasing power: once we calculate costs in terms of hours worked, we recognize that a much larger percentage of our labor / earnings is devoted to paying for essentials. Simply put, we're getting less value for our labor. Pundits tend to overlook the fundamental sources of declining purchasing power. These include:
1. Decay of gains reaped from globalization. Stripped of corporate PR, globalization is the ruthless exploitation of as-yet unexploited pools of cheap labor and resources. This exploitation yields enormous gains at first, and then these gains decay as wages rise and the easy-to-get resources are depleted. The dependence on foreign sources for essentials has also been revealed as a national security threat, and so the catch-phrase is "de-risking," which means developing multiple sources of essentials.
2. Capital demanding higher returns due to soaring global risks. In the conventional view, the Federal Reserve chair waves a magic wand and lowers interest rates at will. It's not quite that simple. All new debt--for example, Treasury bonds--must be purchased by capital, and if risks are rising, capital demands a risk premium to offset the known unknowns and the unknown unknowns, both of which are proliferating rapidly. If capital is no longer willing to accept low yields, yields have to rise regardless of central bank policy, and this drags interest rates higher. Yes, central banks can create currency out of thin air and use this free money to buy Treasury bonds, but ballooning the money supply has its own consequences:
3. Increasing the money supply to maintain a sclerotic, unproductive status quo generates a decline in the purchasing power of currency. Throwing trillions of new units of currency around doesn't magically mean production of goods and services increases, or that the quality and quantity of items increase. It just diminishes the value of existing units of currency.
4. Global scarcities crimp supply, pushing up costs. Humans have a very high opinion of themselves, but fundamentally we're like rabbits (or rats, if you prefer) let loose on an island without predators. Like rabbits, we proliferate and consume more per rabbit until the resources have been consumed. Then we wonder why scarcities arise. But AI, blah-blah-blah. AI can't restore depleted soil or reverse droughts.
5. Soaring entitlements must be paid for with higher taxes. Promises made decades ago in different conditions mean ever greater resources must be skimmed by governments. Creating money out of thin air isn't a solution (see #3 above), and so the government must collect a greater share of income and wealth. The more taxes we pay, the less we have left to spend on essentials and discretionary purchases. This is a global dynamic. Global entitlements and debt are both soaring.substack.com, 4d ago
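The "hours worked" yardstick from the opening of this snippet is simple arithmetic: divide a price by the hourly wage. A toy illustration with entirely made-up numbers, showing how purchasing power can fall even while the nominal wage rises:

```python
# Toy illustration of measuring purchasing power in hours worked.
# All wages and prices below are invented for the example.
def hours_to_buy(price: float, hourly_wage: float) -> float:
    """How many hours of labor does this purchase cost?"""
    return price / hourly_wage

# Hypothetical essential good: the nominal wage rises 50%,
# but the price rises 140%, so the cost *in hours worked* goes up.
then_hours = hours_to_buy(price=500.0, hourly_wage=20.0)
now_hours = hours_to_buy(price=1200.0, hourly_wage=30.0)

print(then_hours, now_hours)  # 25.0 hours then vs. 40.0 hours now
```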

Top

This is a great article that I would like to see go further with respect to both people and AGI. With respect to people, it seems to me that, once we assume intent, we build on that error by then assuming the stability of that intent (because people's intents tend to be fairly stable), which then causes us to feel shock when that intent suddenly changes. We might then see this as intentional deceit and wander ever further from the truth - that it was only an unconscious whim in the first place. Regarding AGI, this is linked to unwarranted anthropomorphism, again leading to unwarranted assumptions of stability. In this case the problem appears to be that we really cannot think like a machine. For an AGI, at least based on current understandings, there are, objectively, more or less stable goals, but our judgement of that stability is not well founded. For current AI, it does not even make sense to talk about the strength of a "preference" or an "intent" except as an observed statistical phenomenon. From a software point of view, the future values of two possible actions are calculated and one number is bigger than the other. There is no difference, in the decision-making process, between a difference of 1,000,000 and 0.000001; in either case the action with the larger value will be pursued. Unlike a human, an AI will never perform an action halfheartedly.lesswrong.com, 28d ago
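The decision rule the commenter describes is a plain argmax, and the margin-insensitivity is easy to see in code. A minimal sketch with arbitrary, invented action values:

```python
# Sketch of the decision rule described above: the agent always takes the
# action with the larger estimated value, whether the margin is 1,000,000
# or 0.000001. Action names and values are arbitrary illustrations.
def choose_action(action_values: dict[str, float]) -> str:
    """Pick the action with the highest estimated future value."""
    return max(action_values, key=action_values.get)

# A huge margin and a vanishingly small one produce identical behavior:
wide = {"act_a": 1_000_000.0, "act_b": 0.0}
narrow = {"act_a": 1.000001, "act_b": 1.000000}

print(choose_action(wide), choose_action(narrow))  # act_a act_a
```

Nothing in the selection step records *how much* larger the winning value was, which is exactly why "strength of preference" only shows up as a statistical pattern over many decisions, not in any single one.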
Another key consideration that could be critical to a successful transition to a value-based pricing model is deciding who will lead this effort organizationally. So far, from what I’m hearing, AI is being folded into the remit of the Chief Technology Officer or into the scope of IT departments. Because of the emerging legal and compliance challenges associated with AI, we can expect the drumbeat for a single “Head of AI” to get louder. Whoever it may be, it is critical that they are brought to the table in early discussions with the client to ensure technology requirements are well-documented, considered and priced appropriately.advertisingweek.com, 5d ago
Limited expert knowledge: AI algorithms may have limitations in accessing and processing expert knowledge in specific subject areas, leading to potential gaps or inaccuracies in the training content. Without a comprehensive understanding of the expertise and insights provided by human trainers, AI tools may struggle to offer in-depth and accurate training content. It’s crucial to acknowledge the limitations of AI in terms of accessing expert knowledge and to involve human experts in the curation and validation of training content. By combining the expertise of human trainers with the capabilities of AI, organizations can ensure that the training materials are up-to-date and well-informed.Training Industry, 28d ago