Minimizing LLM latency in code generation

Optimizing Frontier’s Code Generation for Speed and Quality

Introduction

Creating Frontier, our generative front-end coding assistant, posed a significant challenge: developers demand both fast response times and high-quality code from AI code generators. This dual requirement pushes us toward the "smartest" large language models (LLMs), which are often the slowest. While GPT-4 Turbo is faster than GPT-4, it doesn't meet our specific needs for generating TypeScript and JavaScript code snippets.

Challenges

  1. Balancing Speed and Intelligence:

    • Developers expect rapid responses, but achieving high-quality code requires more advanced LLMs, typically slower in processing.
  2. Code Isolation and Assembly:

    • We need to generate numerous code snippets while keeping them isolated. This helps us identify each snippet’s purpose and manage their imports and integration.
  3. Browser Limitations:

    • Operating from a browser environment introduces challenges in parallelizing network requests, as Chromium browsers restrict the number of concurrent fetches.

Solutions

To address these challenges, we implemented a batching system and optimized LLM latency. Here's how (a short TypeScript sketch of the flow follows the list):

Batching System

  1. Request Collection:

    • We gather as many snippet requests as possible and batch them together.
  2. Microservice Architecture:

    • These batches are sent to a microservice that authenticates and isolates the front-end code from the LLM, ensuring secure and efficient processing.
  3. Parallel Request Handling:

    • The microservice disassembles the batch into individual requests, processes them through our regular Retrieval-Augmented Generation (RAG), multi-shot, and prompt template mechanisms, and issues them in parallel to the LLM.
  4. Validation and Retries:

    • Each response is analyzed and validated via a guardrail system. If a response is invalid or absent, the LLM is prompted again. Unsuccessful requests are retried, and valid snippets are eventually batched and returned to the front end.
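Conceptually, the server-side flow can be sketched in a few lines of TypeScript. This is a minimal illustration, not Anima's actual service code; `callLLM` and `isValidSnippet` stand in for the real LLM client (with RAG context, multi-shot examples, and prompt templates applied) and the guardrail validator:

```typescript
// Minimal sketch of the batch fan-out, validation, and retry loop.
type SnippetRequest = { id: string; prompt: string };
type SnippetResult = { id: string; code: string | null };

declare function callLLM(prompt: string): Promise<string>;
declare function isValidSnippet(code: string): boolean;

async function handleBatch(
  batch: SnippetRequest[],
  maxAttempts = 3
): Promise<SnippetResult[]> {
  // The batch arrived as a single fetch from the browser; on the server we
  // can fan out to the LLM in parallel without Chromium's connection limits.
  return Promise.all(
    batch.map(async ({ id, prompt }) => {
      for (let attempt = 0; attempt < maxAttempts; attempt++) {
        const code = await callLLM(prompt);
        if (isValidSnippet(code)) return { id, code }; // valid: keep it
        // Invalid or empty response: prompt the LLM again.
      }
      return { id, code: null }; // give up after maxAttempts
    })
  );
}
```

Valid snippets are then reassembled into a single batched response for the front end.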

Micro-Caching

We implemented micro-caching to enhance efficiency further. By hashing each request and storing responses, we can quickly reference and reuse previously generated snippets or batches. This reduces the load on the LLM and speeds up response times.
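A minimal sketch of the idea, assuming an in-memory map and SHA-256 request hashing (the actual store and hash choice are implementation details we're not showing):

```typescript
// Sketch of micro-caching: hash each request and reuse stored responses.
import { createHash } from "crypto";

const cache = new Map<string, string>();

function requestKey(prompt: string): string {
  return createHash("sha256").update(prompt).digest("hex");
}

async function cachedGenerate(
  prompt: string,
  generate: (p: string) => Promise<string>
): Promise<string> {
  const key = requestKey(prompt);
  const cached = cache.get(key);
  if (cached !== undefined) return cached; // cache hit: skip the LLM entirely
  const result = await generate(prompt);
  cache.set(key, result);
  return result;
}
```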

Conclusion

The impact of parallelization and micro-caching is substantial, allowing us to use a more intelligent LLM without sacrificing performance. Despite slower individual response times, the combination of smart batching and caching compensates for this, delivering high-quality, rapid code generation.

Guard rails for LLMs

Implementing Guard Rails for LLMs

Large Language Models (LLMs) have made a profound leap over the last few years, and with each iteration, companies like OpenAI, Meta, Anthropic, and Mistral have leapfrogged one another in general usability and, more recently, in their models' ability to produce useful code. One of the critical challenges in using LLMs is ensuring the output is reliable and functional. This is where guard rails for LLMs become crucial.

Challenges in Code Generation with LLMs

However, because LLMs are trained on a wide variety of coding techniques, libraries, and frameworks, getting them to produce a unique piece of code that runs as expected is still quite hard. Our first attempt at this was with our Anima Figma plugin, which has multiple AI features. In some cases, we wanted to support new language variations and new styling mechanisms without creating inefficient heuristic conversions that would simply be unscalable. Additionally, we wanted users to be able to personalize the code we produce and to add state, logic, and more capabilities to the code we generate from Figma designs. This proved much more difficult than originally anticipated. LLMs hallucinate, a lot.

Fine-tuning helps, but only to some degree: it reinforces languages, frameworks, and techniques the LLM is already familiar with, but that doesn't stop the LLM from suddenly turning "lazy" (leaving /* todo */ comments instead of implementing the code we wanted to mutate or augment, or simply repeating it back unchanged). It's also difficult to avoid plain hallucinations, where the LLM invents its own instructions and alters the developer's original intent.

As the industry progresses, LLM laziness goes up and down, and we can use techniques like multi-shot prompting and emotional blackmail to ensure that the LLM sticks to the original plan. But in our case, we are measured by how usable the code we produce is and how well it visually represents the original design. So we had to create a build tool that evaluates the differences and feeds any build and visual errors back to the LLM. If the LLM hallucinates a file or instructions, the build process catches it and the error is fed back to the LLM to correct, just like the normal build-and-fix loop a human developer would follow. By setting this as a target, we could also measure how well we optimized our prompt engineering and Retrieval-Augmented Generation (RAG) operations, and which model is best suited for each task.

 

Strategies for Implementing Guard Rails

 
This problem arose again when we approached our newest offering: Frontier, the VSCode extension that uses your design system and code components when it converts Figma designs to code.
In this case, a single code segment could have multiple implementations that take additional code sections as child components or props, which demands much tighter guardrails for the LLM. Not only do we need all the previous tools, we also need to validate that the results are valid code, and this has to happen very quickly, which rules out a "self-healing" approach. Instead, we identify props and values using the existing codebase, combined with parsing the TypeScript of the generated code, to ensure that it makes sense and is valid against the code component we have chosen to embed at that particular point in the codebase. Interestingly, even though the LLMs generate very small function calls and receive a fair amount of context and multi-shot examples, they hallucinate more often than expected. Fine-tuning might help, but we assume hallucination is an inherent part of the technology and requires tight guardrails.
 
That means that for each reply from the LLM, we first validate that it is a well-formed response; if it is invalid, we explain to the LLM what is wrong with it and ask it to correct itself. In our experience, a single retry often does the trick, and if that fails, subsequent rounds will likely fail too. Once initial validation passes, we go through the reply and validate that it makes sense; a few simple validation heuristics improve the success rate dramatically.
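In rough TypeScript, the loop described above might look like the sketch below; `generate` and `findProblems` are stand-ins for the LLM call and our validation heuristics:

```typescript
// Sketch of the validate-explain-retry guardrail loop.
declare function generate(prompt: string): Promise<string>;
declare function findProblems(reply: string): string[]; // empty = valid

async function guardedGenerate(prompt: string): Promise<string | null> {
  let reply = await generate(prompt);
  let problems = findProblems(reply);
  if (problems.length === 0) return reply;

  // One retry with the validation errors spelled out; in our experience,
  // further retries rarely recover, so we stop there.
  const retryPrompt =
    `${prompt}\n\nYour previous reply was invalid:\n- ` +
    `${problems.join("\n- ")}\nPlease correct it.`;
  reply = await generate(retryPrompt);
  problems = findProblems(reply);
  return problems.length === 0 ? reply : null;
}
```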
 

Conclusion: The Necessity of Guard Rails for LLMs

Hallucinations are an inherent part of LLMs, cannot be ignored, and require dedicated code to overcome. In our case, we give the user a way to provide even more context to the LLM, in which case we explicitly ask it to be more creative in its responses. This is an opt-in solution for users and often generates better placeholder code for components based on existing usage patterns. Interestingly, when we apply this to component libraries that the LLM was trained on (MUI, for example, is quite popular), hallucinations increase, because the LLM has a prior bias toward those component implementations; the guard rails are particularly useful there.
 
Start using Frontier for free and experience the benefits of robust guard rails for LLM in your code generation process.

Introducing Frontier’s New Feature: Code Injection

We are excited to announce the release of a powerful new feature in Frontier: Code Injection. This feature enhances your ability to seamlessly integrate generated code from Figma into your existing projects, saving time and reducing the need for manual copy-pasting.

Why Did We Create Code Injection? 🤔

  1. We noticed that many of our users were exporting only parts of the code from Figma, often leading to broken implementations. A complete component needs all its pieces— index (TSX or JSX), CSS, assets, and the right styleguide references—to work properly.
  2. We heard from you that manually copying and pasting each file was quite tedious. Downloading assets from one place and uploading them to another? Yawn! 😴

We knew there had to be a better way. Enter Code Injection. We developed this feature to streamline your workflow, making the process of integrating design into development as seamless as possible.

How Does It Work? 🛠

Example Scenario: Implementing a Subscribe Modal Component

The Figma Design:

[Image: Figma design example]
You open the Figma design and see that it includes:

  • A few input fields (that you already have in your code ✅ – <Input>)
  • A submit button (that you haven’t created in code yet ⭕)
  • A checkbox (that you haven’t created in code yet ⭕)
  • Some text and an icon (non-component elements)

1. Provide your design to Frontier in VSCode

  1. Paste the Figma link
  2. Select the Modal component
  3. Click “Inject component”

 

2. The Injection magic:

  1. Frontier will detect that you already have an <Input> component, but that you’re missing the <Button> and <Checkbox> components.
  2. Frontier will generate and inject the <Button> and <Checkbox> components into your source code, with all the necessary folders and files (e.g., TSX, CSS, assets).
  3. Frontier will build a <Modal> component:
    1. Components: it imports your existing <Input> component and the newly generated <Button> and <Checkbox> components.
    2. Non-component elements: Frontier inlines the code for simple elements like text and icons directly within the generated component.

 

Code example

Here’s how the code for a “Modal” component might look after using Code Injection:

[Image: Code Injection example]
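Since the screenshot isn't reproduced here, below is a rough reconstruction of what an injected component could look like. The file layout, import paths, and prop names are our invention for illustration, not Frontier's actual output:

```tsx
// Hypothetical reconstruction of an injected Modal component.
import React from "react";
import { Input } from "../Input";       // existing component, reused
import { Button } from "../Button";     // newly generated and injected
import { Checkbox } from "../Checkbox"; // newly generated and injected
import "./style.css";

export const Modal = (): JSX.Element => {
  return (
    <div className="modal">
      {/* Non-component elements (text, icons) are inlined directly */}
      <h2 className="modal-title">Subscribe to our newsletter</h2>
      <Input placeholder="Email address" />
      <Checkbox label="I agree to receive updates" />
      <Button variant="primary">Subscribe</Button>
    </div>
  );
};
```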

Get Started 🚀

Try out the new Code Injection feature today and streamline your design-to-code workflow with Frontier! Your feedback is crucial as we continue to enhance Frontier’s capabilities.

Why Use Code Injection? 🌟

  • Efficiency: Automatically generate and integrate components directly into your project, reducing manual coding effort.
  • All-in-One: Generate your component with all its necessary files and assets in one click, streamlining your workflow.

Feel free to reach out if you have any questions or need assistance. We’re here to support your journey to more efficient and consistent coding!

Happy coding! ✨

Get Frontier

Pluggable design system – Figma to your design system code

Design to code is a difficult problem to crack; there are many variations to consider. On the Figma side, we have to consider auto layouts, design tokens, component sets, instances, and Figma variables. On the code side, we have to assume that the codebase could contain both local and external components that could come from anywhere.

That’s why, when we created Frontier, we didn’t want to stick to just one coding design system. MUI, for example, is a very popular React design system, but it’s one of <very> many design systems that rise and fall. Ant Design is still extremely popular, as is the TailwindCSS library. We’re seeing the rapid rise of Radix-based component libraries like ShadCN, as well as Chakra and NextUI. We knew that if we wanted to reach a wide audience, we could not rely on a limited subset of design systems; we had to create a “pluggable design system”.

Key Challenges in Implementing a Pluggable Design System

There are a few challenges to accomplishing this:

    1. Existing Project Integration:

      You may have an existing project that already uses a design system. In this case, we are expected to scan the codebase, then understand and reuse that design system. When Frontier starts, it looks through your codebase for local and external components and for usages of those components (you can restrict where it actually scans and also control how deeply it looks at the code).

    2. Design and Code Component Mismatch:

      When we look at the Figma design, we don’t assume that the designer knows which component system will be used to implement the design. Typically, in an enterprise with a design system team, the components in the design will visually match their code counterparts, but not necessarily share names or variants, nor have a 1:1 mapping between the Figma and code components. In fact, the same design could be implemented with different design systems’ code components and be fully expected to match and work.

    3. Flexible Implementation:

      Once applied, components could have multiple ways to implement overrides and children:

      1. Props / variants
      2. Component children
      3. Named slots
    4. The “Cold start” problem

      Even if you solve scanning the project’s repo, what happens when you encounter a brand-new project and want to use a new library with it? In this case, you would have zero code usage examples and zero components that you are aware of…

To overcome these problems we started with a few assumptions:

    1. Leverage Usage Examples:

      If the project has a robust set of usage examples, we can take inspiration from them and understand how this particular project uses those components, which helps us solve the props/overrides/children/named-slots issue.

    2. Custom Matching Model

      We had to create a custom model that understands how designers implement their components in design systems and how developers code the corresponding code components. This matching model was trained on a large set of open-source design system repos and open Figma design systems. It reached a surprisingly high matching rate on all our tests; it looks like many designers and developers think in similar ways despite using very different conventions and actual designs.

    3. Cross-System Matching

      Once we were able to match within the same design system, the next challenge was to make the model robust when matching across design systems: take a design that relies on AntD components and train the model to implement it using MUI components, or vice versa. This made the model much more versatile.

    4. Local Storage for Privacy and Security

      For security and privacy purposes, we have to encode and store our RAG embeddings database locally, on the user’s machine. This allows us to perform much of the work locally, without having to send the user’s code to the cloud for processing.

Interestingly, the fact that we can store bits and pieces of this database also opens up possibilities for cold starts. An empty project can now easily state that it wants to use MUI and simply download and use the prebuilt embeddings. That gives the LLM all the usage context it needs to produce much more robust results, even when the codebase is completely empty of any actual context.
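A rough sketch of how such a cold start could work follows; the download URL, file names, and loader API are invented for illustration:

```typescript
// Hypothetical cold-start loader: if the workspace has no local embeddings
// pack, download a prebuilt one for the chosen library and cache it locally.
import { promises as fs } from "fs";
import * as path from "path";

export async function loadEmbeddings(workspace: string, library: "mui" | "antd") {
  const localPath = path.join(workspace, ".anima", `${library}-embeddings.json`);
  try {
    return JSON.parse(await fs.readFile(localPath, "utf8"));
  } catch {
    // Cold start: fetch a prebuilt pack (placeholder URL), then cache it.
    const res = await fetch(`https://example.com/embeddings/${library}.json`);
    const pack = await res.json();
    await fs.mkdir(path.dirname(localPath), { recursive: true });
    await fs.writeFile(localPath, JSON.stringify(pack));
    return pack;
  }
}
```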

The result is that Frontier can now generate code components in projects even if the design system in Figma doesn’t actually match the code design library, and even when the codebase is completely devoid of any actual examples.

Generative code: how Frontier solves the LLM Security and Privacy issues

When it comes to generative AI and LLMs, the first question we get is how we approach the security and privacy aspects of Frontier. This is a reasonable question given the copyright issues that many AI tools are plagued with. AI tools, after all, train on publicly available data and so could expose companies to potential copyright liability.

But it’s not just that: companies have invested heavily in their design language and design systems, which they would never want exposed externally, and their codebase is also a critical asset that they would never want included in LLM or AI training.
 
When designing Frontier, privacy and security were foremost concerns from day one. First, it was clear to us that Frontier users cannot expose their codebase to anyone, including us. That means much of the data processing had to take place on the user’s device, which is quite difficult given that we run in a sandbox inside a VSCode extension. Second, we needed to expose the minimum amount of data and design to the cloud. Additionally, any data that needed to be stored had to be stored in such a way that it could be shared by multiple team members but not kept in the cloud. Finally, none of our models could have any way to train on the user’s design or codebase.

The first part was isolating the Figma designs. By building a simplified data model in memory within VSCode, using the user’s own credentials, we effectively facilitate an isolated connection between the user and the Figma APIs, without us in between and without our servers ever seeing a copy of the design.
 
The typical implementation of generative code systems is to collect the entire codebase, break it into segments, encode the segments into embeddings, and store them in a vector database. This approach is effective but won’t work well in our case, since storing this data on our servers would mean we are exposed to it. In addition, the codebase continually evolves and would need to be re-encoded and re-stored regularly, which would make this process slow and ineffective.
 
Instead, our approach was to develop an in-memory embedding database that can be stored and retrieved locally and rebuilds extremely quickly, even on large codebases. To secure this data, we store it in the user’s workspace, where it can be included in the git repository and shared between users, or simply rebuilt per user.
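In spirit, the index looks something like the sketch below: an in-memory store that can be queried by similarity and persisted into the workspace. The cosine scoring and JSON format here are generic stand-ins, not our actual implementation:

```typescript
// Illustrative in-memory embedding index that persists into the workspace.
import { promises as fs } from "fs";

type Entry = { id: string; vector: number[] };

const dot = (a: number[], b: number[]) => a.reduce((s, x, i) => s + x * b[i], 0);
const cosine = (a: number[], b: number[]) =>
  dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));

export class LocalEmbeddingIndex {
  private entries: Entry[] = [];

  add(id: string, vector: number[]): void {
    this.entries.push({ id, vector });
  }

  // Top-k nearest entries by cosine similarity.
  query(vector: number[], k = 5): string[] {
    return [...this.entries]
      .sort((a, b) => cosine(b.vector, vector) - cosine(a.vector, vector))
      .slice(0, k)
      .map((e) => e.id);
  }

  // Persist inside the workspace so the index can be committed and shared,
  // or simply rebuilt per user.
  async save(file: string): Promise<void> {
    await fs.writeFile(file, JSON.stringify(this.entries));
  }

  async load(file: string): Promise<void> {
    this.entries = JSON.parse(await fs.readFile(file, "utf8"));
  }
}
```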
 
But this would be useless if we had to send a large code sample to an LLM for each line of code we generate. Instead, we implemented a local model that runs in VSCode, so when we do need to use an LLM, we share the interface of the components instead of their code. Users can improve the results by opting in to include some real-world usage examples, sharing with the LLM a simplified, thin sample of how the Button component is used in the codebase, but not how Button is implemented or what it actually looks like or does…
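To make that concrete, here is the kind of minimal payload that could be shared with the LLM: the component's props interface plus one thin usage line, and nothing of the implementation. The names and prop shapes are hypothetical:

```tsx
// Hypothetical illustration of the minimal context shared with the LLM.
import type { ReactNode } from "react";

// 1. The interface of the component, extracted locally:
export interface ButtonProps {
  variant?: "primary" | "secondary";
  disabled?: boolean;
  onClick?: () => void;
  children: ReactNode;
}

// 2. An opt-in, simplified usage example from the codebase:
// <Button variant="primary" onClick={handleSubmit}>Save</Button>
```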
 
By limiting the amount of data and anonymizing it, we can guarantee that the LLM doesn’t get trained on or store the user’s code in any way.
 
But how do we guarantee that data doesn’t get “leaked” back into the codebase from outside sources the LLM trained on, exposing the company to potential copyright risk? First, we limit the type of code the LLM can generate to specific component implementations, and only after it passes a guard-rail system. The LLM guard rail validates that the code makes sense and can identify hallucinations that might invalidate the code or introduce copyright liability into the codebase. If the code passes the guard-rail system, we can be highly confident that the results correlate with what the user expects from the component code.
 
Finally, for full transparency, we store the data in open JSON files inside the .anima folder in your project’s workspace. Different workspaces have different settings and components. Sharing this information between users can be done through git (or a shared file system of any kind), which keeps Anima from being exposed to any of the cached data: components, usage, the codebase, or the Figma design data.


LLMs Don’t Get Front-end Code

I see this pattern repeat every few months: a new multimodal LLM comes out, and someone on Twitter takes a screenshot of a game or app and provides it to the LLM, resulting in working code that actually runs.
 
Hence the meme: Front End Developers, you will soon be replaced by AI…
 
After so many years of managing software, I should know better. The variations between teams, and between projects within each team, are infinite. Each team uses a different combination of tools, frameworks, libraries, coding styles, and CSS languages/frameworks, all of which are constantly changing. Small startups typically adopt a public design system and adapt it to their needs, while larger companies have their own customized design system components maintained by a dedicated team. Good luck asking an LLM to conform to these requirements; it has zero context for that combination of tools and components.
 
So, good luck trying to get the LLM to code in your style, use your front-end components, and show an in-depth understanding of design. At best, it can take a 2D image of your screens and make it do something… Turning that result into production code will likely take you longer than starting from scratch.
 
More so, as the tools evolve, the level of complexity and thought that goes into these combinations makes front-end developers professional problem solvers. They typically get an impossible Figma design, which they have to fully understand, then negotiate changes with the designer until, hopefully, they can adapt it to the design system. These are very human problems, and they require human operators to drive them.

Enter: Useful generative coding

But LLMs are revolutionary and will make a huge impact on developers. Given the right context, AI can locate and correct bugs, help design the software, and turn developers into 10x individual contributors (10xICs). This is precisely what GitHub Copilot does: it learns from your project and, given the huge amount of relevant context, attempts to predict what you’re trying to accomplish and generates the code for that prediction. Developers get an efficiency boost using Copilot, but there’s just one problem…
 
Copilot understands concepts like functionality, components, and state. It fundamentally does not understand design. Why would it? It has no context for the design the front-end developer is working from, so when you start creating React components, it will just give you boilerplate code that it most likely learned from your project or from other designs. I often see it generating an endless round of meaningless HTML gibberish; its chance of actually predicting your design is infinitesimally small. As for matching your particular components and giving you code that’s of value, that’s sci-fi…
 
That’s why many front-end developers either don’t use GitHub Copilot at all or use it for everything apart from design. But what if you could extract context from the design? That’s where Anima Frontier comes in. Frontier has context from the Figma design, including a deep understanding of the Figma components, overrides, and Figma design system, as well as your codebase and your design system’s code components. By matching those, and with the ability to generate scaffolding code based on the designer’s specifications (not a static snapshot of their design), the resulting code is a perfect companion made specifically for front-end developers. It works together with GitHub Copilot to fill the void that is design.
 
We do not really think that designers or front-end developers are going away any time soon, and we don’t think it’s realistic that they’ll be replaced by automated tools. Tools like Frontier are intended to work like Copilot: to make front-end development easier and more approachable. By providing context and assistance to the developer, we can make front-end developers more productive. This is exactly the type of tool I wish I had when I started coding; it’s the perfect way to extract the most from what the designer has already embedded in the design, sometimes without even realizing it.

Does Frontier support NextJS?

Short answer: Yes!

Long answer:

NextJS is an extremely popular framework for ReactJS that provides quite a few benefits, one of which is the mix of server- and client-side components.

Server-only components are components that do not use/need state and can pull their data from external APIs without worrying about credentials falling into the wrong hands. They can only be rendered on the server. Server components may contain server and/or client components.

Client-only components are components that have the “use client” directive defined. A component that uses state and other React APIs needs to be a client component, but a client component doesn’t require state to function.

In Next.js, components are server components by default. This ensures that fully formed HTML is sent to the user on page load. It’s up to the developer’s discretion to set the client boundaries. If components are not using state and are not making outward API calls, they can be implemented as either client or server components, which is ideal.

Since it can be quite complex to determine which type a particular React component is (server-only, client-only, or agnostic), Frontier will generate client components by default when it detects NextJS. This is done by adding the ‘use client’ directive at the top of the component’s file.

This default exists because it can be challenging to identify whether the rendered component tree includes descendants that must be rendered on the client side. Without a ‘use client’ directive covering those components, runtime errors may occur.

If you remove the ‘use client’ directive and the code still builds with no errors, the client boundaries have been set correctly, and you can let Next.js determine whether the component is rendered on the client or the server. If, on the other hand, removing it causes a build error, one or more of the descendants uses client-only APIs but hasn’t declared itself a client component. In this case, you can add the ‘use client’ directive back to the code we’ve created, or add the directive directly inside the offending descendant.
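For illustration, a generated client component might begin like this (the component itself is hypothetical; only the directive placement matters):

```tsx
"use client"; // added by default when Next.js is detected

import { useState } from "react";

// Hypothetical generated component: it uses React state, so it must be a
// client component. Removing the directive would fail the build here.
export default function SubscribeForm() {
  const [email, setEmail] = useState("");
  return (
    <form onSubmit={(e) => e.preventDefault()}>
      <input value={email} onChange={(e) => setEmail(e.target.value)} />
      <button type="submit">Subscribe</button>
    </form>
  );
}
```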

So, what’s the bottom line?

Short answer: Yes, Frontier supports NextJS!

Start here!

Introducing: Frontier, the future of front-end by Anima

In the age of generative AI, we expect AI to simply understand us. And in many cases, it already does. It is pure magic when a tool provides exactly what you need based on a tiny hint.

Our goal at Anima is to automate front-end engineering so humans don’t waste their time. During 2023, Anima’s AI produced over 750k code snippets, the equivalent of 1,000 years’ worth of human coding. With over 1 million installs on Figma’s platform, Anima is leading the design-to-code space.

As the next phase, we are taking a deeper path into automating day-to-day front-end coding.

Today’s LLMs do not understand Front-end and UI

There are many models built around code generation, from code completion to instruction following. Multiple popular copilots, coding assistants that help you code faster, are based on these code models.

However, when it comes to front-end automation, we believe there’s a big gap between what’s out there and what’s possible. With Anima’s capabilities and our understanding of this domain, we’re aiming to close this gap.

And so, today, we announce Frontier – An AI coding assistant for developers building Front-end.

[Image: Frontier in VSCode]

Frontier – AI Code-gen with your code in mind, tailored for frontend

Anima Frontier meets developers where they are: the IDE. We’re starting with VSCode, the most popular IDE.

First, Frontier analyzes the entire codebase and maps your code design system, frameworks, conventions, and components. This part takes seconds and is done locally, so your code is as secure as possible.

Second, using Anima’s state-of-the-art design-to-code engine, Frontier analyzes your design and understands both the design-side version of your design system and its code counterpart.

And lastly, you can pick any part of the Figma design right inside VSCode and get code based on YOUR code. And it is magical.

Start Free

Check out this walkthrough of Frontier by Andrico, a developer at Anima:

[Video: Frontier walkthrough by Andrico]

Increasing Design-system adoption with automation

Mature projects often have hundreds of components, if not thousands.
Design-system governance and adoption are challenging tasks that are crucial for maintaining these projects. Automation helps.

[Image: the safe path towards design-system adoption is automation]

AI Security and Guardrails

Frontier was built from the ground up to offer an Enterprise-level secured solution.

AI adoption in enterprise companies faces more friction due to two common privacy concerns:

  • Egress privacy: How do we ensure that our code doesn’t “leak” into the LLM model through training, which means other companies might receive snippets of our code?
  • Ingress privacy: How do we ensure that other companies’ code that might have been fine-tuned or trained into the LLM, doesn’t enter our code base – causing security and potentially copyright concerns?

In order to generate code that integrates Anima’s interpretation of the Figma design but uses the components in the user’s codebase, we could have taken the easy way and simply trained the LLM on the codebase. This has severe privacy and security implications, as we would have needed to upload a significant amount of user/enterprise code and train a custom LLM on it. We realize how critical security and privacy are, particularly to developers in enterprise environments, so we took a completely different direction.

Instead of uploading code to the cloud, we implemented local data gathering, indexing, and ML models that run inside VS Code. These identify and index the relevant code on the developer’s machine. The gathered information is stored locally, as part of the codebase, which means it can be shared securely within the team through Git rather than through the cloud. When a particular component needs to be instantiated, we perform a significant amount of preprocessing locally and send the cloud LLM only the small amount of code and information it needs, not enough to expose the enterprise to any ingress or egress risk. This innovative approach has the added benefit of performance, as most of the operations run on the developer’s fast machine.

Under the hood of Frontier – LLMs, ML, and AI-first architecture

Anima Frontier is automating the front-end with AI, based on Anima’s vast experience in leading this space and utilizing the most advanced tech for the mission.

We often see impressive weekend projects that are 99% powered by LLMs and produce amazing results 30% of the time. These are cool projects, but they are not suitable for professionals.

LLMs, as powerful as they are, open new doors but aren’t silver bullets; they require a supporting environment. At Anima, we test and benchmark, choosing the right tool for each task. When it comes to LLMs, we provide them with context, validate their results, and set them up for success.

In the process of solving this complex problem, we broke it down into tens of smaller problems and requirements. Some require creativity and are solved best with LLMs, where specific models are faster and perform better than others. Some are classic machine-learning / computer-vision problems, i.e., classification rather than generation. Some are solved best with heuristics.

By combining the best-of-class solutions for each individual problem, we can produce mind-blowing results with minimal risk of LLM Hallucinations, which are so prevalent in LLM-based code solutions.

What’s next for Frontier

As we look to utilize everything AI makes possible to help developers code front-end faster, it feels like we’re just scratching the surface. Anima Frontier should be able to merge code with design updates, heal broken code, understand states and theming, name elements correctly, read specs, and think more and more like a human developer.

We have a rich list of features, and we need you to tell us what bothers you most and what you’d expect AI to do for front-end developers today. Join the conversation on Anima’s Discord channel.

 

Start Free

 

GenAI Figma to Code: 6 Examples of how to use Anima’s new AI Code Customization

Anima’s latest innovation, GenAI code personalization within Figma, is game-changing for front-end developers. This feature introduces a layer of customization that speaks directly to the developer’s style and technical requirements. 

Developers can use simple prompts to guide the code generation process to use their specific coding conventions, frameworks, or architectural patterns. This article explores practical use cases and examples where Anima’s GenAI empowers developers to maintain coding standards while significantly accelerating the design-to-code conversion process, opening new avenues for efficiency and collaboration in software development.

Let’s see how Anima’s GenAI helps you add code conventions, styles, behaviors, and animations.

1. Using Anima’s GenAI to add SEO-friendly semantic HTML

When creating a new web page from a Figma design, you need to add a bunch of semantic HTML to prepare for on-page SEO. Anima GenAI offers an “SEO Friendly” preset that adds all the tags based on its understanding of your Figma design content.

[Image: selecting the “SEO Friendly” preset in Anima GenAI]

Here we started from a Portfolio template available on the Figma community.

[Image: Portfolio design in Figma]

And here is the result, after personalizing the code with the SEO-friendly preset:

[Image: SEO preset applied by Anima GenAI]

In this example, Anima’s GenAI automatically added SEO features to the code (a sketch of the resulting structure follows the list):

  • Contextual semantic meta tags, which derive their content from the design.
  • A placeholder for the application/ld+json script
  • <nav>
  • <main>
  • link target and rel attributes
  • <footer>
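As a sketch, the generated page skeleton has this kind of shape; the headings, links, and names below are invented for illustration, and the contextual meta tags and ld+json placeholder land in the document head:

```tsx
// Illustrative semantic skeleton of the kind the "SEO Friendly" preset adds.
import React from "react";

export const Portfolio = (): JSX.Element => (
  <>
    <nav>
      <a href="#work">Work</a>
      <a href="#about">About</a>
    </nav>
    <main>
      <h1>Jane Doe, Product Designer</h1>
      {/* page content derived from the design */}
    </main>
    <footer>
      <a href="https://example.com" target="_blank" rel="noopener noreferrer">
        Contact
      </a>
    </footer>
  </>
);
```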

2. Using Anima’s GenAI to create a responsive font with REMs

It’s generally considered better practice, and more responsive, to use REM units instead of pixels for font sizes. REM bases its size on the root element, which can easily be controlled relative to the screen or through media queries.
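The conversion itself is simple arithmetic: with the browser-default root font size of 16px, rem = px / 16. For example:

```typescript
// px-to-rem conversion, assuming the browser default root font size of 16px.
const ROOT_FONT_SIZE_PX = 16;

function pxToRem(px: number): string {
  return `${px / ROOT_FONT_SIZE_PX}rem`;
}

console.log(pxToRem(24)); // "1.5rem": 24px text now scales with the root element
```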

Since REM is such a common request, Anima provides a dedicated preset to convert font sizes to REM units. To test this preset, we used this Landing page, available on the Figma community.

[Image: landing page with REM responsive font]

In the AI personalization tab, in Presets, under “Typography”, we selected “Use REMs for font units”.

[Image: “Use REMs for font units” preset in Anima]

And here we go:

[Image: before/after GenAI comparison with REMs]

3. Using Anima’s GenAI to add behavior/logic to a design: the Weather App

This is a pretty neat use case, where we use GenAI to make your code work in terms of basic UI logic.

In this example, we designed a weather app. It is straightforward: a main screen with a search box and placeholders for various pieces of information. However, a developer typically needs to figure out how to connect the design to state management and then how to retrieve the state from an API call.

To do that, we can use Anima’s GenAI to fill in these missing parts: connect the search to an API endpoint and then populate the results into the various components of the app. For that, we provided the API endpoint and API key as custom instructions.

"Make it work" preset by Anima GenAI


And here is a snippet from the results:

[Image: weather app code snippet]
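The snippet itself is a screenshot; reconstructed roughly (the endpoint, key, and field names below are placeholders, not the real values), the generated wiring follows this pattern:

```tsx
// Hypothetical reconstruction of the generated weather-app wiring:
// search text -> API call -> state -> rendered placeholders.
import { useState } from "react";

const API_URL = "https://api.example.com/weather"; // from custom instructions
const API_KEY = "YOUR_API_KEY";                    // from custom instructions

export default function WeatherApp() {
  const [query, setQuery] = useState("");
  const [weather, setWeather] =
    useState<{ temp: number; city: string } | null>(null);

  const search = async () => {
    const res = await fetch(
      `${API_URL}?q=${encodeURIComponent(query)}&key=${API_KEY}`
    );
    setWeather(await res.json());
  };

  return (
    <div>
      <input value={query} onChange={(e) => setQuery(e.target.value)} />
      <button onClick={search}>Search</button>
      {weather && <p>{weather.city}: {weather.temp}°</p>}
    </div>
  );
}
```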

4. Add behavior/logic to a design with the “Make it work” preset: Pokedex

“Make it work” takes this to a new level: it uses GenAI to analyze the Figma design, “understand” what it is you are trying to build, and then fill in the logic needed to execute that implementation.

For this next example, we designed a mini Pokedex app, using the React + CSS + Typescript setting.

[Image: Pokemon app in Figma]

Without personalization, you would get the high-fidelity React version of this app, and you would still be left with a lot of work hooking the various components up to state and executing the API call. So, we turned on “Make it work”.

"Make it work" preset by Anima GenAI

Tip: You may get better results with the “Smart” option rather than the “Fast” option when going for more complex tasks.

You can see below that the AI added state management, found the Pokemon API all by itself, understood how to use await fetch to fetch the results, set the API results in their respective fields properly, and logged console errors if the request failed.

[Image: “Smart” option for the “Make it work” preset]

Tip 2: If the preset is not giving you correct behaviors, feel free to add additional free-text instructions to help it understand what you’re trying to achieve. For example, when we created a game of Pong, we had to explain to the AI that the ball needs to bounce off the paddles and the top and bottom of the screen.

5. Using Anima’s GenAI to add animation

Here we used another variation of the Landing Page UI Kit.

[Image: landing page before entrance animation]

While this does look great, why not improve on it by adding some fun entrance animations? In this case, we just selected HTML+CSS and turned on the “Add entrance animation” preset. As before, you can add more detail about your expectations for the animations in custom instructions.

[Image: “Add entrance animation” preset in Anima GenAI]

And here we go, after a few seconds:

[Video: the resulting entrance animation]

6. Using Anima’s GenAI to change code convention

Let’s look at the Pokemon app we covered earlier. By adding a custom instruction, you can modify the code styles and conventions.
Here we added the custom instruction “Use React with classes”:

[Image: “Use React with classes” custom instruction]

See below the before and after, with this extra instruction added on top of the “Make it work” preset.

[Image: React hooks vs. React classes]

With custom instructions, the options are limitless. As with every AI tool, it might need a few tweaks, and you might find that code generation is slower than without personalization. But it is worth it!

Why not try Anima GenAI and share your results with us?

Need a step-by-step tutorial? Read the docs here​​ 🙌
