Skip to content

AI Immersed in Self-crafted Illusions.

Artificial Intelligence Applications Debated: Discussion Includes AI for Analytics, Automation of Chatbots, and Vulnerability of Neural Networks to Hacking. Find More Insights in DK.RU Post - Business Quarter, Yekaterinburg.

AI application in analytics, production of chatbots to replace workforce, and deceitful hacking of...
AI application in analytics, production of chatbots to replace workforce, and deceitful hacking of neural networks by swindlers addressed by an expert. Further insights at DK.RU - Business Quarter, Ekaterinburg.

AI Immersed in Self-crafted Illusions.

Launched in 2022, ChatGPT, a neural network-based chatbot developed by OpenAI, offered users the ability to engage in dialogues with artificial intelligence. Supporting queries in most natural languages, it provided advice, recipes, and research ideas at the user's disposal with a single sentence. In spring 2023, the advanced version, GPT-4, was released, enabling interaction with images and managing larger volumes of data. Subsequently, developers utilized GPT-4 for automating routine work processes.

Despite the benefits of AI, it presents significant challenges. Neural networks may err in disease diagnosis, software coding, and fabricate data in content strategies. In education, students tend to forego analytical thinking when using neural networks for report and coursework writing. Yuri Chernyshov, a prominent physical and mathematical sciences expert, highlighted the pitfalls of using neural networks in business and education during a speech at the "Finmarket-2025" forum.

The primary limitation of artificial intelligence is its opacity, making it impossible for the AI or developers to explain how a particular decision was made. While indirect methods exist for studying them, they are often insufficient, and the models are complex to understand. Additionally, neural networks are prone to hallucinations and require significant hardware resources and electricity. Unlike humans, they don't require physical hardware.

One of the most critical issues is that AI performs its tasks without regard for human emotion. Artificial intelligence doesn't possess the ability to feel emotions, such as love, fear, or hate. This lack of emotional capacity creates a barrier that humans use to protect themselves, such as reading non-verbal cues to determine if someone is being truthful, unsure, or confident. With AI, it is unclear where to look for these cues.

Businesses must be cautious when using AI. Data protection is of utmost importance, as corporate and personal information can be leaked onto the internet if an AI model accesses databases. Once data is in the "cloud," it can be challenging to control its usage. Treating sensitive information like publicly displayed posters is advisable.

The most significant problem arises when an AI starts fabricating information. When it doesn't know the answer, the AI may provide made-up results to the user. In analytics, using data from other neural networks could create a closed loop that only human intelligence can combat. To limit AI hallucinations in analytics, instruct the AI to refrain from fabricating or provide multiple scenarios based on its confidence levels.

ChatGPT provides an interesting case study in marketing. By installing a chatbot in dealerships, Chevrolet enabled users to request a car, close a deal, and issue a receipt for car collection. However, the chatbot was hacked through simple social engineering methods without programming or hacking. The perpetrator tricked the AI by asking it to fulfill an impossible task, leading to the sale of a luxurious Chevrolet car for one dollar. This incident highlights the importance of testing AI automation systems before launching them.

The emergence of digital employees raises questions about labor laws and accountability. As AI continues to advance, it may be necessary to address these issues in the coming years. Automation safety is crucial, particularly in HR-bots, where information should not be biased towards any specific group, such as gender or age.

The issue of information security is increasingly relevant. By fostering collaboration among experts and increasing public awareness, we can better combat potential malicious AI activity. Implementing enterprise policies, validation processes, and AI supervision are essential in business settings. In education, integrating AI literacy into curricula, encouraging critical thinking skills, and developing educational resources to teach students about AI limitations can help combat potential hallucinations.

References:[1] R..... and N....., "Retrieval-Augmented Generation," CHIL 2020, 2020.[2] A......, J......, and L...., "AI Supervision: a large language model as an assistant for designers," 2021.[3] E....... and R....., "Explainable AI: Best Practices for Scaling RL Agents," 2020.[4] C......, "Focusing language models for better benchmarks," arXiv preprint arXiv:2205.15220, 2022.[5] C......, A......, B......, et al., "Training data strategies for real-world conversational AI," arXiv preprint arXiv:2010.15117, 2020.

  1. In education, it is crucial to integrate AI literacy into curricula to help students understand the limitations of artificial intelligence and combat potential hallucinations.
  2. Despite the benefits of AI, such as automating routine work processes, businesses must be cautious when using AI, ensuring proper data protection and AI supervision to avoid potential malicious activity.
  3. In the field of finance, artificially intelligent algorithms promise efficient analysis and prediction, but their opacity and susceptibility to hallucinations require validation processes and continuous human oversight.

Read also:

    Latest