When was the first moment you realized that AI had become part of your daily life? For me, working in the AI field, staying updated on new technologies and trends is routine, but that doesn't necessarily mean AI directly affects my daily life. It was only last November, when ChatGPT was introduced, that I truly understood how the line between my life and AI technology had blurred. Even my parents, who had always found AI intimidating, started asking me about ChatGPT and how they could use it. They began freely asking it questions and getting answers.
I once gave my parents an AI speaker to familiarize them with AI technology. However, they only used it a few times, and the AI speaker never saw much action. When I asked about their reluctance to use it, they humorously responded, "The AI speaker is so stupid that it doesn't understand us." Interestingly, they have now become quite adept at using ChatGPT! Seeing my parents effortlessly engage with ChatGPT made me realize that AI has indeed become a welcome guest in our daily lives, reaching a broader audience than ever before.
“Generative AI is just a phase. What’s next is Interactive AI.” — Mustafa Suleyman
Mustafa Suleyman, a co-founder of DeepMind, has expressed the view that Generative AI is just a phase, and that the future lies in Interactive AI. This means that AI will evolve beyond one-way communication and engage in two-way interactions, extending the scope of technology adoption both broadly and deeply. This pivotal moment marks a significant milestone in the history of technology, suggesting that AI will seamlessly integrate into our daily lives.
During Intel's annual developer event, Intel Innovation 2023, held on September 20th, Chief Technology Officer Greg Lavender unveiled a vision of a new world, one in which artificial intelligence signifies a generational transformation in computing. He emphasized, "AI can and should be accessible to everyone to deploy responsibly," underscoring the crucial role of AI in our technological landscape.
Lavender then delved into how Intel's developer-centric, open ecosystem philosophy is working to ensure that the benefits of AI are within reach for all. For example, Intel has taken a meaningful step by becoming a founding member of the Linux Foundation's Unified Acceleration Foundation (UXL), an open accelerator software ecosystem. This membership highlights Intel's commitment to cross-platform deployment compatible with various computing architectures. Moreover, Intel remains dedicated to contributing to AI and ML tools and frameworks, such as PyTorch, to further advance the field and make AI accessible to a wider audience.
In the keynote, Intel showcased a demo in which disabilities are transformed into digitally enhanced strengths through the use of sensing and AI-enabled PCs. Intel collaborated with Starkey Laboratories, a hearing aid manufacturer, to connect a hearing aid to a Samsung Galaxy Book, demonstrating a user scenario that illustrates how AI can enhance the hearing aid experience.
During the demo, Intel CEO Pat Gelsinger participated in a conference call. The PC demonstrated its "contextual awareness" by automatically switching his hearing aids between "ambient aware" and "focus" modes. In "focus" mode, background noise was filtered out, but the computer remained attentive to external sounds, such as a knock at the door. A pop-up notification alerted him to the knock, which he promptly dismissed. When he turned his attention to his colleagues, the hearing aids switched to "ambient aware" mode, allowing him to engage in conversation. When he returned to the computer, it seamlessly caught him up: a "summarizer" recognized and condensed the missed portions of the meeting into text.
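To make the demo's flow concrete, the mode-switching logic can be pictured as a small decision rule: pick a hearing-aid mode from the user's current context, and surface important external sounds even while focused. The sketch below is purely illustrative; the function and mode names are assumptions for this post and are not Intel's or Starkey's actual APIs.

```python
# Illustrative sketch of the demo's "contextual awareness" behavior.
# All names here are hypothetical, not a real Intel/Starkey interface.

def choose_mode(user_facing_pc, in_call):
    """Pick a hearing-aid mode from the user's current context."""
    if user_facing_pc and in_call:
        # Filter background noise while the user attends the call.
        return "focus"
    # Let ambient sound through for in-person conversation.
    return "ambient aware"


def handle_sound_event(event, mode):
    """Surface external sounds (e.g., a knock) even in focus mode."""
    if mode == "focus" and event == "knock":
        return "Notification: someone is at the door"
    return None  # nothing to surface


mode = choose_mode(user_facing_pc=True, in_call=True)   # "focus"
alert = handle_sound_event("knock", mode)               # pop-up text
```

In the actual demo this decision would be driven by on-device sensing and sound recognition rather than boolean flags, but the control flow, detect context, pick a mode, escalate important sounds, is the same.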
This demo provided a visual representation of how AI will integrate into our daily lives. Pat emphasized, “Our focus at Intel is to bring AI everywhere,” and the demo highlighted their strong commitment to making this vision a reality.
Cochl's goal is to advance machine listening technology capable of comprehending all sounds in the world. Through our Sound AI technology, we consistently strive to enhance people's everyday lives. Cochl has already conducted significant trials in sound visualization through the development of Cochl.Sense and its collaboration with NVIDIA Riva for use on Microsoft HoloLens 2. By integrating our technology into people's daily lives, Cochl aims to broaden the spectrum of experiences available to individuals.
Cochl is fortunate to have a remarkable opportunity to bring AI into our daily lives. Through our collaboration with Intel, using Intel's 4th Gen Xeon Scalable processors and the OpenVINO toolkit, we've enhanced Cochl.Sense's performance and scalability, propelling it to new levels of capability. This partnership has the potential to unlock a future where sound isn't just something we can 'hear', but also something we can 'see' and 'feel', opening up exciting possibilities.
The journey of bringing AI everywhere is only just beginning, and it is set to advance at an unprecedented pace. We invite you to look forward to the future that Cochl will create!