IDEA
-- played a crucial role in shaping the early development concept. Inspired by a well-known AI integration project, it began as an experiment in bringing ideas to life through code. The system assigns distinct roles to different AI agents and lets them engage in human-like dialogue and collaborative reasoning, so a single question can be dissected and analyzed from multiple perspectives, blending anthropomorphic traits with the depth of AI knowledge bases.
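To make the mechanism concrete, below is a minimal sketch of the role-assignment idea, assuming the OpenAI Chat Completions API: each agent is just a different system prompt, and the agents answer the same question in turn. The agent names, prompts, and model are illustrative rather than the project's actual code.

```html
<!-- Minimal sketch of the role-based dialogue loop (illustrative; not the live implementation). -->
<script>
  // Placeholder key: in practice the key should sit behind a small server-side proxy,
  // never in client-side code.
  const OPENAI_API_KEY = "sk-...";

  // Each "agent" is simply a different system prompt over the same chat endpoint.
  const agents = [
    { name: "Analyst", prompt: "You are a calm, logical analyst. Break the question into parts." },
    { name: "Critic",  prompt: "You are a sceptical critic. Challenge the previous answers." },
    { name: "Dreamer", prompt: "You are an imaginative thinker. Offer an unconventional angle." }
  ];

  async function ask(agent, question, transcript) {
    const res = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": "Bearer " + OPENAI_API_KEY
      },
      body: JSON.stringify({
        model: "gpt-4o-mini", // any chat-capable model would do here
        messages: [
          { role: "system", content: agent.prompt },
          { role: "user", content: question + "\n\nDiscussion so far:\n" + transcript }
        ]
      })
    });
    const data = await res.json();
    return data.choices[0].message.content;
  }

  // One question is passed through each "personality" in turn,
  // so it gets examined from several perspectives.
  async function discuss(question) {
    let transcript = "";
    for (const agent of agents) {
      const reply = await ask(agent, question, transcript);
      transcript += agent.name + ": " + reply + "\n";
    }
    return transcript;
  }
</script>
```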
However, the current framework remains fairly rudimentary. It lacks memory storage capabilities on the website side, meaning that conclusions are typically one-off and not retained. Additionally, the prompt design for each AI role is still underdeveloped and requires further refinement to fully realize the system’s potential.
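If memory were added later, one lightweight option on the website side would be the browser's localStorage; the snippet below is only a possible direction, with an invented storage key and data shape, not part of the current framework.

```html
<!-- Possible future direction: keep past conclusions in the browser (not currently implemented). -->
<script>
  const STORAGE_KEY = "multiagent-conclusions"; // hypothetical key

  // Append one finished discussion to the stored history.
  function saveConclusion(question, transcript) {
    const history = JSON.parse(localStorage.getItem(STORAGE_KEY) || "[]");
    history.push({ question, transcript, savedAt: new Date().toISOString() });
    localStorage.setItem(STORAGE_KEY, JSON.stringify(history));
  }

  // Read back everything saved so far.
  function loadConclusions() {
    return JSON.parse(localStorage.getItem(STORAGE_KEY) || "[]");
  }
</script>
```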
DEVELOPMENT PLAN
The entire project relies heavily on ChatGPT, as I am still a beginner in HTML and CSS. Building a fully functional website requires far more than basic markup and styling; it usually also involves JavaScript and elements of backend development. Creating interactive web experiences is technically complex, and there is a clear gap between designing a website and successfully implementing that design in code.
At this stage, the visual design of the MULTIAGENT-TALK module is still underdeveloped and will need refinement in the future. As the scale of the site grows, it’s likely that consolidating style elements and unifying the visual system will also become necessary.
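When that consolidation happens, a common approach is to gather shared values into CSS custom properties so every module reads from one palette and type scale. The token names below are purely illustrative.

```html
<!-- Sketch of a shared style layer using CSS custom properties (names are illustrative). -->
<style>
  :root {
    --color-bg: #111111;
    --color-text: #f2f2f2;
    --color-accent: #ff4d00;
    --font-body: "Helvetica Neue", Arial, sans-serif;
    --space-unit: 8px;
  }

  /* Individual modules reuse the same tokens instead of hard-coding values. */
  .multiagent-talk {
    background: var(--color-bg);
    color: var(--color-text);
    font-family: var(--font-body);
    padding: calc(var(--space-unit) * 3);
  }
</style>
```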
HOW AI CHANGED MY WORKFLOW
This method had a key advantage—it allowed me to practice HTML and CSS while ChatGPT generated the code. Although still a time-consuming process, it was far more approachable for a beginner like me. However, it required a foundational understanding of HTML in order to properly interpret and refine the AI-generated output.
Additionally, I used nesting, embedding existing pages inside the new one, to reduce repetitive work. For example, the current page embeds content from an existing Cargo website, which saved roughly 50% of the development time. That said, this method raises practical concerns, such as cross-origin limitations and the extra memory each embedded page consumes, which must be managed carefully.
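The reuse is presumably done with an iframe, the standard way to nest one page inside another; the sketch below shows the general pattern with a placeholder URL. The embedded page stays on its own origin, which is where the cross-origin limits come from, and each frame keeps a full document in memory.

```html
<!-- Embedding an existing Cargo page inside the new site (the URL is a placeholder). -->
<!-- loading="lazy" defers the frame until it scrolls into view, easing memory use; -->
<!-- the embedded page lives on another origin, so its styles and scripts cannot be touched from here. -->
<section class="archive-embed">
  <iframe
    src="https://example.cargo.site/old-project"
    title="Archived project page"
    loading="lazy"
    width="100%"
    height="800"
    style="border: none;">
  </iframe>
</section>
```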
As I continued using ChatGPT and became more familiar with the tools, I gradually integrated this AI-assisted workflow into my regular process—boosting my efficiency by roughly 40%. My current approach to building a website follows these steps:
- Establish the core layout framework – I first define the basic structure and then adjust it according to the content needs of each section.
- Focus on the main content blocks – Based on specific requirements, I prompt ChatGPT to generate tailored layout and style code for each part.
- Add interactivity – Through site testing, I identify interactive elements I want to implement, such as hover effects, click actions, or loading animations, and use GPT to generate interaction scripts accordingly (an example appears after this list).
- Test, debug, and refine – I then go through rounds of debugging and visual adjustments to ensure smooth performance and design coherence.
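As an illustration of the interactivity step, the sketch below pairs a CSS hover state with a small click toggle; the class names and markup are invented for the example rather than copied from the live site.

```html
<!-- Illustrative interaction snippet: hover lift plus a click toggle (class names are made up). -->
<style>
  .project-card {
    transition: transform 0.3s ease, box-shadow 0.3s ease;
  }
  .project-card:hover {
    transform: translateY(-6px);
    box-shadow: 0 12px 24px rgba(0, 0, 0, 0.15);
  }
  .project-details {
    display: none;
  }
  .project-card.expanded .project-details {
    display: block;
  }
</style>

<div class="project-card">
  <h3>MULTIAGENT-TALK</h3>
  <div class="project-details">Role-based AI dialogue experiment.</div>
</div>

<script>
  // Clicking a card toggles its detail block open and closed.
  document.querySelectorAll(".project-card").forEach((card) => {
    card.addEventListener("click", () => card.classList.toggle("expanded"));
  });
</script>
```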
Typically, building sections that require specific functionalities takes significantly more time. It often involves multiple rounds of in-depth dialogue and debugging with GPT before arriving at a workable solution. Interaction design, in particular, demands careful consideration and creative thinking.
Looking ahead, enhancing the visual appeal of the website is a key focus in my development plan. One of the main directions I intend to explore is incorporating SVG-based animations to enrich interactive experiences and bring more dynamic, visually engaging elements to the interface.
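A typical starting point for SVG animation is making a path draw itself by animating its stroke offset; the example below demonstrates the general technique rather than a design from the site.

```html
<!-- A self-drawing SVG line: stroke-dashoffset is animated from (at least) the path length down to zero. -->
<style>
  .draw-line {
    stroke-dasharray: 400;   /* a value at least as long as the path */
    stroke-dashoffset: 400;  /* start fully hidden */
    animation: draw 2s ease forwards;
  }
  @keyframes draw {
    to { stroke-dashoffset: 0; }
  }
</style>

<svg width="320" height="120" viewBox="0 0 320 120">
  <path class="draw-line"
        d="M10 60 C 80 10, 160 110, 310 60"
        fill="none" stroke="#ff4d00" stroke-width="4" />
</svg>
```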
THE FLOW
-- functions as the team's arranger, characterized by a cool, self-contained demeanor and a few unintentionally humorous catchphrases. As the team's interactive AI, he contributes professional insights when appropriate, supporting the creative process with precision and a touch of personality.
ABOUT
The inspiration for "LOOP" came from revisiting and rewriting a previously abandoned project idea. Initially, I planned to embed a model capable of basic interactive actions on the main website interface, complemented by typographic elements. However, the early implementations frequently encountered bugs that disrupted normal interactions. During this process, I discovered Spline's AI voice interaction feature, which led me to shift the development direction toward a model-centered website—reducing layout-related issues caused by overlapping text. Integrating AI voice interaction brought the otherwise static model to life, which I found particularly intriguing.
TECHNICAL IMPLEMENTATION
The project is built in four key stages:
- Model Construction – This involves creating the main 3D model, setting up the scene, and positioning the camera.
- Interactive Animation – Responsive interactions are added by assigning and transitioning between different "states" of the model to create smooth animated effects.
- AI Integration – OpenAI's API is connected to the system, with character prompts written to define personality, memory, tone, and knowledge base. A corresponding voice-interactive animation is also developed.
- Web Embedding – Finally, an HTML structure is built to embed the model, combining it seamlessly with the Spline-generated embed code (a sketch follows this list).
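Spline exports ready-made embed code for a finished scene, and the page skeleton below shows roughly how such an embed can sit inside the site's own HTML; the scene URL is a placeholder. The character-prompt wiring to OpenAI's API follows the same Chat Completions pattern as the multi-agent sketch earlier, so it is not repeated here.

```html
<!-- Page skeleton wrapping the Spline embed (the scene URL is a placeholder). -->
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <title>LOOP</title>
    <style>
      /* Let the model fill the viewport so the site stays model-centered. */
      body { margin: 0; }
      .model-stage { width: 100vw; height: 100vh; }
      .model-stage iframe { width: 100%; height: 100%; border: none; }
    </style>
  </head>
  <body>
    <div class="model-stage">
      <!-- Spline's "Embed" export produces an iframe roughly like this. -->
      <iframe src="https://my.spline.design/placeholder-scene-id" title="LOOP model"></iframe>
    </div>
  </body>
</html>
```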