In my Nano Tips series on ChatGPT so far, I’ve covered Data Storytelling and Visualization, and Technical Prompts. Since OpenAI announced that “ChatGPT can now see, hear, and speak,” I thought it would be a good idea to create some additional videos explaining those multimodal capabilities.
In this course, you will learn how to set up Advanced Data Analysis in ChatGPT, perform multistep document interpretation, join multiple tables with prompts, and analyze data using linear regression.
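To give you a sense of what that last topic involves: when you ask ChatGPT to run a linear regression, Advanced Data Analysis writes and executes Python behind the scenes, roughly along the lines of the sketch below. The file name `sales.csv` and the `ad_spend`/`revenue` columns are hypothetical placeholders, not data from the course.

```python
# A minimal sketch of the kind of linear regression code ChatGPT's
# Advanced Data Analysis typically generates when prompted.
# "sales.csv" and its "ad_spend"/"revenue" columns are hypothetical.
import numpy as np
import pandas as pd

df = pd.read_csv("sales.csv")

# Fit a simple least-squares line: revenue ≈ slope * ad_spend + intercept
slope, intercept = np.polyfit(df["ad_spend"], df["revenue"], deg=1)

# R² to gauge how much of the variance the fitted line explains
predicted = slope * df["ad_spend"] + intercept
ss_res = np.sum((df["revenue"] - predicted) ** 2)
ss_tot = np.sum((df["revenue"] - df["revenue"].mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"revenue ≈ {slope:.2f} * ad_spend + {intercept:.2f} (R² = {r_squared:.3f})")
```

The point of the course, of course, is that you never have to write this yourself: a well-structured prompt gets ChatGPT to generate, run, and interpret it for you.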
Along the way, I will demonstrate a handful of skills useful for designers, such as how to extract and organize text from images, code an app element from an image, create UX design annotations, conduct text-to-image experimentation with DALL-E, and much more.
By the end of the course, you’ll also be ready to browse the web with Bing from inside ChatGPT and use AI-enhanced voice conversations and commands.
Here is a quick sneak peek into my favorite video from the course:
Video: “Extract and organize text from images,” from Nano Tips for Navigating Advanced Data Analysis, Vision, and Voice in ChatGPT by Lachezar Arabadzhiev
If you have any questions or just want to stay in touch, feel free to do one of the following:
🎥 Explore my LinkedIn Learning courses on storytelling through data and design!
📰 Follow me on LinkedIn, and click the 🔔 at the top of my profile page to stay up to date with my latest content!
✅ Subscribe to receive my biweekly “My Next Story Is…” newsletter. Yep, it’s my brand-new newsletter!