It has come to my attention that some students were following the links from Blackboard rather than the ones on this site. To rectify this, I have pushed the deadline back by a week. Additionally, I have added short description links and a short video on how to use and evaluate the interfaces below.
You are tasked with creating an application that uses Large Language Models to satisfy certain interactive objectives you specify.
The objectives are outlined by you and evaluated by your peers and the course staff.
Your tasks include:
Assignment Lessons:
As we did in class, we have four LLMs set up in different ways to help you get started, organised in steps so you can focus on each of the objectives.
(REQUIRES SUBMISSION - Portal Coming Soon) Prepare a testing JSON/TXT file that you can submit to each LLM, collect the responses, and save them. Link An explanation video is given above.
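One way to structure such a test file is sketched below. The schema, model names, and the `call_llm` helper are all illustrative assumptions, not a required format — replace `call_llm` with the actual call for each interface you are testing.

```python
import json

# A minimal sketch of a testing file: a list of prompts to send to each LLM.
# The exact schema is up to you; this is one possible layout.
test_cases = [
    {"id": 1, "prompt": "Introduce yourself in one sentence."},
    {"id": 2, "prompt": "What can you help me with?"},
]

# Save the test file so it can be reused for every LLM.
with open("tests.json", "w") as f:
    json.dump(test_cases, f, indent=2)

def call_llm(name, prompt):
    # Hypothetical placeholder: replace with the real call for each interface.
    return f"[{name}] response to: {prompt}"

# Run every test case against each of the four LLMs and record the responses.
results = []
with open("tests.json") as f:
    for case in json.load(f):
        for model in ["llm_a", "llm_b", "llm_c", "llm_d"]:
            results.append({"id": case["id"], "model": model,
                            "response": call_llm(model, case["prompt"])})

with open("responses.json", "w") as f:
    json.dump(results, f, indent=2)
```

Keeping the prompts in one file means every LLM sees exactly the same inputs, which makes the responses directly comparable.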
Blackboard link: Link
We will upload demonstrative videos for the sections below.
This section is not graded, but it is a good way to tie everything together for those who are interested in seeing how it all comes together.
The topics we have covered so far present us with several separate AI capabilities. In this part we will see how we can use LLMs to call those disparate AI tools so that they can act on various inputs, using an illustrative example drawn from your submissions and our own examples.
We show an example of an LLM calling external tools, based on our conversation, to generate audio or an image. Although the example here uses audio/video, the tool could be any functional component.
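The tool-calling pattern can be sketched in a few lines: the model's structured reply names a tool and its arguments, and our code dispatches to the matching function. Everything below is a hypothetical stand-in — the generator functions and the shape of `tool_call` are assumptions for illustration, not any particular LLM provider's API.

```python
# A minimal sketch of LLM tool calling: the model's reply names a tool and
# its arguments, and our code dispatches to the matching function.
# Both generators are hypothetical stand-ins for real audio/image engines.

def generate_audio(text):
    # Stand-in for a text-to-speech engine.
    return f"audio({text})"

def generate_image(prompt):
    # Stand-in for an image-generation model.
    return f"image({prompt})"

TOOLS = {"generate_audio": generate_audio, "generate_image": generate_image}

def dispatch(tool_call):
    # tool_call mimics the structured output an LLM would produce,
    # e.g. {"tool": "generate_image", "args": {"prompt": "a cat"}}.
    fn = TOOLS[tool_call["tool"]]
    return fn(**tool_call["args"])

result = dispatch({"tool": "generate_image",
                   "args": {"prompt": "a friendly robot"}})
```

Swapping "audio" for "video" or any other capability only means registering another function in `TOOLS`, which is the sense in which the tool could be any functional component.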
Up till now, we have built a chatbot whose primary mode of interaction was text. Using the same AI frameworks we discussed in the image/video tutorials, we can replace that modality of interaction.
In this section, we will use Whisper and Coqui TTS to make audio the modality of interaction.
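The speech-in/speech-out loop can be sketched as below. To keep the sketch runnable without the heavy models installed, the three stages are passed in as callables; in the actual assignment Whisper would supply `transcribe`, your chatbot from the earlier parts would supply `reply`, and Coqui TTS would supply `synthesize`. The stub lambdas and return values are purely illustrative.

```python
# Sketch of one audio interaction turn: speech -> text -> LLM -> text -> speech.
# The stages are injected as callables so the pipeline stays library-agnostic;
# in practice Whisper provides `transcribe` and Coqui TTS provides `synthesize`.

def speech_turn(audio_in, transcribe, reply, synthesize):
    user_text = transcribe(audio_in)   # e.g. Whisper speech-to-text
    answer = reply(user_text)          # the chatbot from the earlier parts
    return synthesize(answer)          # e.g. Coqui TTS text-to-speech

# Stub stages so the sketch runs without the real models.
out = speech_turn(
    b"...wav bytes...",
    transcribe=lambda audio: "hello",
    reply=lambda text: text.upper(),
    synthesize=lambda text: f"<spoken:{text}>",
)
```

Because the chatbot sits in the middle unchanged, switching from text to audio only replaces the input and output stages.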
In this part, we will use creative engines such as Unreal Engine and HeyGen, and similar tools we have explored in class, to drive avatars and experience the use of AI in various applications.
You can of course use other LLM solutions as well. If you are interested in implementing your Part 1 submission using one of these options, reach out to us and we can suggest ways of bringing it to reality after your examinations.
We will continue in parts 5/6/7 to integrate audio/video/avatar/application components using the tutorials given previously; we will review your initial submissions to provide appropriate platforms for these expressions.
For instance:
Project 1: Objective: Friendly Robot/Avatar that can have a conversation with you.
You could, for instance, use HeyGen to create an avatar and use an LLM to hold a conversation with you.
You could also use Unreal Engine to create a more interactive game character that you could use to have a conversation with. Unreal Engine
Project 2: Objective: Plant Cultivation and Stress Reliever
We could design a chatbot that guides plant cultivation to relieve stress among young people, using plant-cultivation guidance as a medium to address emotional stress. This choice could build on the healing properties of plants and the bonding effect of shared cultivation, facilitating user engagement with the chatbot.
Project 3: Objective: Customer Support Robot
We could design a chatbot that answers questions about a company or product, focusing its answers on some provided information.
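A simple way to focus such a chatbot on the provided information is to place that information in the system prompt. The sketch below builds the message list in the common chat-completion format; the product text, function name, and the final LLM call are all illustrative assumptions.

```python
# Sketch of grounding a support chatbot in provided product information by
# placing it in the system prompt. The message layout follows the common
# chat-completion convention; sending it to an LLM is left as a placeholder.

PRODUCT_INFO = "Acme Widget: 2-year warranty, returns within 30 days."  # example data

def build_support_messages(product_info, question):
    return [
        {"role": "system",
         "content": "You are a customer support assistant. Answer only from "
                    "the following information:\n" + product_info},
        {"role": "user", "content": question},
    ]

messages = build_support_messages(PRODUCT_INFO, "What is the return window?")
# `messages` would then be sent to the LLM of your choice.
```

Instructing the model to answer only from the supplied text is what keeps the bot on-topic about the company or product rather than answering from general knowledge.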