New Feature Alert: Enhance Your Enterprise Security with Single Sign-On (SSO) Now Available in Anima’s Enterprise Plan (August 6, 2024)


We are thrilled to announce the launch of Single Sign-On (SSO) support as part of our Enterprise Plan! This new feature is designed to enhance security, streamline access, and simplify user management for organizations of all sizes. With support for the Security Assertion Markup Language (SAML) protocol, we are taking authentication to the next level.

What is SSO and SAML?

Single Sign-On (SSO) is a powerful authentication process that allows users to access multiple applications with a single set of credentials. By integrating SSO, organizations can improve security and provide a seamless login experience for their teams. Gone are the days of juggling multiple usernames and passwords.

SAML, or Security Assertion Markup Language, is an open standard for exchanging authentication and authorization data between parties, in particular, between an identity provider (IdP) and a service provider (SP). SAML enables SSO by securely transmitting user identities from the IdP to the SP, ensuring that users can access resources without needing to re-enter credentials.

How SSO Works

Integrating SSO is a collaborative effort between us and the client. Here’s how the process works:

  1. Exchange of Information:

    To get started, the client provides us with their Identity Provider (IdP) metadata. This metadata includes essential details such as the IdP certificate, attribute mapping, and other necessary information.

  2. Configuration and Setup:

    In return, we provide the client with our Service Provider (SP) metadata, which includes our certificate, Assertion Consumer Service URL, and other relevant information. This exchange ensures that both parties have the necessary data to configure their systems securely.

  3. Seamless Login Experience:

    Once the setup is complete, users can enjoy a seamless login experience. After selecting the SSO option, users will enter their work email and continue. They will then be redirected to the IdP login page for authentication. Once authenticated, users will be redirected back to the web app to complete the login or signup process with Anima.
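To make the exchange concrete, here is a minimal TypeScript sketch of the two metadata bundles; the field names are simplified illustrations, not the formal SAML metadata schema:

```tsx
// Illustrative shapes only; real SAML metadata is XML and far more detailed.
interface IdentityProviderMetadata {
  entityId: string;         // unique identifier of the IdP
  ssoUrl: string;           // where users are redirected to authenticate
  certificate: string;      // X.509 certificate used to verify IdP signatures
  attributeMapping: Record<string, string>; // e.g. { email: "user.email" }
}

interface ServiceProviderMetadata {
  entityId: string;                    // unique identifier of the SP (Anima)
  assertionConsumerServiceUrl: string; // where the IdP posts the SAML assertion
  certificate: string;                 // SP certificate for signing/encryption
}
```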

Why Choose SSO?

Implementing SSO offers several key benefits:

  • Enhanced Security: By centralizing authentication, SSO reduces the risk of password breaches and provides an additional layer of security. Users no longer need to manage multiple passwords, which reduces the likelihood of weak or reused passwords.
  • Streamlined Access: SSO simplifies the login process, allowing users to access multiple applications with a single set of credentials. This streamlined approach boosts productivity and eliminates the need to remember numerous passwords.
  • Simplified Management: For IT teams, SSO offers a centralized platform to manage user access and permissions. Onboarding and offboarding become more efficient, reducing administrative overhead and ensuring compliance with security policies.

Getting Started with SSO

This new SSO feature is available exclusively as part of our Enterprise Plan. If you’re interested in upgrading or learning more about how SSO can benefit your organization, please contact our sales team.

Stay tuned for more exciting updates!
Minimizing LLM latency in code generation (August 1, 2024)


Optimizing Frontier’s Code Generation for Speed and Quality

Introduction

Creating Frontier, our generative front-end coding assistant, posed a significant challenge. Developers demand both fast response times and high-quality code from AI code generators. This dual requirement necessitates using the “smartest” large language models (LLMs), which are often slower. And while GPT-4 Turbo is faster than GPT-4, it doesn’t meet our specific needs for generating TypeScript and JavaScript code snippets.

Challenges

  1. Balancing Speed and Intelligence:

    • Developers expect rapid responses, but achieving high-quality code requires more advanced LLMs, typically slower in processing.
  2. Code Isolation and Assembly:

    We need to generate numerous code snippets while keeping them isolated. This helps us identify each snippet’s purpose and manage its imports and integration.
  3. Browser Limitations:

    • Operating from a browser environment introduces challenges in parallelizing network requests, as Chromium browsers restrict the number of concurrent fetches.

Solutions

To address these challenges, we implemented a batching system and optimized LLM latency. Here’s how:

Batching System

  1. Request Collection:

    • We gather as many snippet requests as possible and batch them together.
  2. Microservice Architecture:

    • These batches are sent to a microservice that authenticates and isolates the front-end code from the LLM, ensuring secure and efficient processing.
  3. Parallel Request Handling:

    • The microservice disassembles the batch into individual requests, processes them through our regular Retrieval-Augmented Generation (RAG), multi-shot, and prompt template mechanisms, and issues them in parallel to the LLM.
  4. Validation and Retries:

    • Each response is analyzed and validated via a guardrail system. If a response is invalid or absent, the LLM is prompted again. Unsuccessful requests are retried, and valid snippets are eventually batched and returned to the front end.
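As a rough sketch (the function names and shapes below are illustrative, not our actual service code), the core of the microservice loop could look like this:

```tsx
// Hypothetical sketch of the batch flow: disassemble the batch, issue requests
// in parallel, validate each reply, and retry invalid ones once.
type SnippetRequest = { id: string; prompt: string };
type SnippetResult = { id: string; code: string };

// Stand-ins for the real RAG, multi-shot prompting, and guardrail logic.
declare function buildPrompt(req: SnippetRequest): string;
declare function callLLM(prompt: string): Promise<string>;
declare function isValidSnippet(code: string): boolean;

async function processBatch(batch: SnippetRequest[]): Promise<SnippetResult[]> {
  return Promise.all(
    batch.map(async (req) => {
      for (let attempt = 0; attempt < 2; attempt++) {
        const code = await callLLM(buildPrompt(req));
        if (isValidSnippet(code)) return { id: req.id, code };
      }
      throw new Error(`snippet ${req.id} failed validation after retry`);
    })
  );
}
```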

Micro-Caching

We implemented micro-caching to enhance efficiency further. By hashing each request and storing responses, we can quickly reference and reuse previously generated snippets or batches. This reduces the load on the LLM and speeds up response times.
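A hedged sketch of the idea, assuming a Node-style runtime (the LLM call is a stand-in):

```tsx
import { createHash } from "crypto";

// Illustrative micro-cache: hash each request and reuse any previously generated snippet.
const cache = new Map<string, string>();

declare function callLLM(prompt: string): Promise<string>; // stand-in for the real call

async function generateWithCache(prompt: string): Promise<string> {
  const key = createHash("sha256").update(prompt).digest("hex");
  const hit = cache.get(key);
  if (hit !== undefined) return hit;  // cache hit: skip the LLM entirely
  const code = await callLLM(prompt); // cache miss: generate and remember
  cache.set(key, code);
  return code;
}
```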

Conclusion

The impact of parallelization and micro-caching is substantial, allowing us to use a more intelligent LLM without sacrificing performance. Despite slower individual response times, the combination of smart batching and caching compensates for this, delivering high-quality, rapid code generation.

Introducing Frontier’s New Feature: Code Injection (July 25, 2024)


We are excited to announce the release of a powerful new feature in Frontier: Code Injection. This feature enhances your ability to seamlessly integrate generated code from Figma into your existing projects, saving time and reducing the need for manual copy-pasting.

Why Did We Create Code Injection? 🤔

  1. We noticed that many of our users were exporting only parts of the code from Figma, often leading to broken implementations. A complete component needs all its pieces (the index file in TSX or JSX, the CSS, the assets, and the right styleguide references) to work properly.
  2. We heard from you that manually copying and pasting each file was quite tedious. Downloading assets from one place and uploading them to another? Yawn! 😴

We knew there had to be a better way. Enter Code Injection. We developed this feature to streamline your workflow, making the process of integrating design into development as seamless as possible.

How Does It Work? 🛠

Example Scenario: Implementing a Subscribe Modal Component

The Figma Design:

Figma design example
You open the Figma design and see that it includes:

  • A few input fields (that you already have in your code ✅ – <Input>)
  • A submit button (that you haven’t created in code yet ⭕)
  • A checkbox (that you haven’t created in code yet ⭕)
  • Some text and an icon (non-component elements)

1. Provide your design to Frontier in VS Code

  1. Paste the Figma link
  2. Select the Modal component
  3. Click “Inject component”

2. The Injection magic:

  1. Frontier will detect that you already have an <Input> component, but are missing the <Button> and <Checkbox> components.
  2. Frontier will generate and inject the <Button> and <Checkbox> components into your source code, with all the necessary folders and files (e.g., TSX, CSS, assets).
  3. Frontier will build a <Modal> component:
    1. Components: imports your existing <Input> component and the newly generated <Button> and <Checkbox> components
    2. Non-Component Elements: Frontier includes inline code for simple elements like text and icons directly within the generated component.

 

Code example

Here’s how the code for a “Modal” component might look after using Code Injection:
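(A minimal sketch; the file paths, props, and copy below are illustrative, not Frontier’s actual output.)

```tsx
// Modal.tsx, as injected: reuses your existing Input, plus the newly generated pieces.
import React from "react";
import "./Modal.css";                   // generated stylesheet
import { Input } from "../Input";       // existing component, reused
import { Button } from "../Button";     // newly generated by Frontier
import { Checkbox } from "../Checkbox"; // newly generated by Frontier

export const Modal = (): JSX.Element => (
  <div className="modal">
    <h2>Subscribe to our newsletter</h2> {/* non-component text, inlined */}
    <Input placeholder="Email" />
    <Checkbox label="I agree to receive updates" />
    <Button>Subscribe</Button>
  </div>
);
```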

Get Started 🚀

Try out the new Code Injection feature today and streamline your design-to-code workflow with Frontier! Your feedback is crucial as we continue to enhance Frontier’s capabilities.

Why Use Code Injection? 🌟

  • Efficiency: Automatically generate and integrate components directly into your project, reducing manual coding effort.
  • All-in-One: Generate your component with all its necessary files and assets in one click, streamlining your workflow.

Feel free to reach out if you have any questions or need assistance. We’re here to support your journey to more efficient and consistent coding!

Happy coding! ✨

Get Frontier

Convert Figma to React & Tailwind Automatically in VSCode (July 10, 2024)


Are you a frontend developer who loves using Tailwind CSS for its utility-first approach and flexibility? If so, you understand the challenges of translating Figma designs into Tailwind-enhanced React components. Aligning new components with both design fidelity and your established styling conventions can be time-consuming.

That’s where Frontier comes in—a revolutionary tool that seamlessly transforms Figma files into React code, perfectly integrating with your existing Tailwind configurations. Frontier meets you where you work, in VS Code.

Effortless Figma to React Conversion in VSCode

Converting Figma designs into React components is more streamlined with Frontier. Here’s how it enhances your workflow:

  • Automatic Component Detection: Frontier scans your Figma design and identifies potential direct matches with existing React components in your codebase.
  • Component Reuse: Frontier generates code that reuses your existing components, enhancing efficiency and reducing code duplication.
  • Tailwind CSS Code Generation: Automatically generates the necessary React code with Tailwind classes applied, preserving the intended design aesthetics while adhering to your predefined styles.
  • Reduce Redundancy: This approach not only accelerates development but also helps keep your codebase clean and manageable.

(Not using VSCode? Translate Figma to Tailwind in Figma)

Seamless Integration with Your Tailwind Config

Frontier does more than just convert designs—it ensures the generated code integrates flawlessly with your existing project frameworks:

  • Tailwind Config Utilization: Detects and uses your tailwind.config.js file, allowing all generated components to inherit your custom styling rules automatically.
  • Intelligent Style Application: Ensures that every component not only matches the design specs but also aligns with your established Tailwind conventions. If needed, Frontier will generate new style configurations that you can then add to your original config file.
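For instance (a hedged sketch; the brand token is invented), a custom color defined in your config can flow straight into generated markup:

```tsx
// tailwind.config.js excerpt (hypothetical custom token):
//   theme: { extend: { colors: { brand: { 500: "#6C5CE7" } } } }

import React from "react";

// A generated component can then reference the token instead of hard-coded values:
export const SubscribeButton = () => (
  <button className="rounded-lg bg-brand-500 px-4 py-2 text-white hover:opacity-90">
    Subscribe
  </button>
);
```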

For front-end developers using Tailwind CSS, Frontier offers a powerful way to enhance your development workflow. It ensures precise translation of your Figma designs into React components and maintains style consistency through smart integration with your Tailwind setup.

Start using Frontier today and take your Tailwind projects to the next level, where design meets code not just with accuracy, but with style 😉

Guard rails for LLMs (July 4, 2024)


Implementing Guard Rails for LLMs

Large Language Models (LLMs) have made a profound leap over the last few years, and with each iteration, companies like OpenAI, Meta, Anthropic, and Mistral have been leapfrogging one another in general usability and, more recently, in their models’ ability to produce useful code. One of the critical challenges in using LLMs is ensuring the output is reliable and functional. This is where guard rails for LLMs become crucial.

Challenges in Code Generation with LLMs

However, because LLMs are trained on a wide variety of coding techniques, libraries, and frameworks, getting them to produce a unique piece of code that runs as expected is still quite hard. Our first attempt at this was with our Anima Figma plugin, which has multiple AI features. With it, we intended to address new language variations and new styling mechanisms without having to create inefficient heuristic conversions that would simply be unscalable. Additionally, we wanted users to be able to personalize the code we produce, adding state, logic, and other capabilities to the code generated from Figma designs. This proved much more difficult than originally anticipated. LLMs hallucinate, a lot.

Fine-tuning helps, but only to some degree – it reinforces languages, frameworks, and techniques that the LLM is already familiar with, but that doesn’t mean that the LLM won’t suddenly turn “lazy” (putting comments with /* todo */ instructions rather than implementing or even repeating the code that we wanted to mutate or augment). It’s also difficult to avoid just plain hallucinations where the LLM invents its own instructions and alters the developer’s original intent.

But as the industry progresses, LLM laziness goes up and down, and we can use techniques like multi-shot prompting and emotional blackmail to ensure that the LLM sticks to the original plan. In our case, though, we are measured by how well the code we produce is usable and visually represents the original design. We had to create a build tool that evaluated the differences and fed any build and visual errors back to the LLM. If the LLM hallucinates a file or instructions, the build process catches it and the error is fed back to the LLM to correct, just like the normal build loop a human developer would run. By setting this as a target, we could also measure how well we optimized our prompt engineering and Retrieval-Augmented Generation (RAG) operations, and which model is ideally suited for each task.

Strategies for Implementing Guard Rails

 
This problem arose again when we approached our newest offering: Frontier, the VSCode extension that utilizes your design system and code components when it converts Figma designs to code. In this case, a single code segment could have multiple implementations that take in additional code sections as child components or props, which calls for much tighter guard rails around the LLM. Not only do we need all the previous tools, we also need to validate that the results are valid code, and this has to happen very quickly, which means a “self-healing” approach wouldn’t work. Instead, we identify props and values using the existing codebase, combined with parsing the TypeScript of the generated code, to ensure that it makes sense and is valid against the code component we have chosen to embed in that particular area of the codebase. Interestingly, even though the LLMs generate very small function calls and get a fair amount of context and multi-shot examples, they hallucinate more often than expected. Fine-tuning might help, but we assume this is an inherent piece of the technology that requires tight guard rails.

That means that for each reply from the LLM, we first validate that it is a well-formed response; if it is invalid, we explain to the LLM what is wrong and ask it to correct itself. In our experience, a single retry shot often does the trick, and if that fails, it will likely fail in subsequent rounds too. Once initial validation passes, we go through the reply and validate that it makes sense; a few simple validation heuristics improve the success rate dramatically.
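As a simplified sketch of that validation loop (the real heuristics go further; this only catches syntax-level problems, and the LLM call is a stand-in):

```tsx
import ts from "typescript";

declare function callLLM(prompt: string): Promise<string>; // stand-in for the real call

// Collect syntax-level diagnostics from the generated TypeScript.
function syntaxErrors(code: string): string[] {
  const { diagnostics = [] } = ts.transpileModule(code, {
    reportDiagnostics: true,
    compilerOptions: { jsx: ts.JsxEmit.React },
  });
  return diagnostics.map((d) => ts.flattenDiagnosticMessageText(d.messageText, "\n"));
}

async function generateWithGuardRail(prompt: string): Promise<string> {
  let reply = await callLLM(prompt);
  const errors = syntaxErrors(reply);
  if (errors.length > 0) {
    // A single retry shot often does the trick; explain what was wrong.
    reply = await callLLM(`${prompt}\n\nYour previous reply had errors:\n${errors.join("\n")}`);
  }
  return reply;
}
```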
 

Conclusion: The Necessity of Guard Rails for LLMs

Hallucinations are an inherent part of LLMs; they cannot be ignored and require dedicated code to overcome. In our case, we give the user a way to provide even more context to the LLM, in which case we explicitly ask it to be more creative in its responses. This is an opt-in solution and often generates better placeholder code for components based on existing usage patterns. Interestingly, when we apply this to component libraries that the LLM was trained on (MUI, for example, is quite popular), hallucinations increase, because the LLM has a prior bias toward those component implementations; the guard rails are particularly useful there.
Start using Frontier for free and experience the benefits of robust guard rails for LLM in your code generation process.

Pluggable design system – Figma to your design system code (July 2, 2024)


Design to code is a difficult problem to crack; there are many variations to consider. On the Figma side, we have to consider auto layouts, design tokens, component sets, instances, and Figma variables. On the code side, we have to assume that the codebase could contain both local and external components that could come from anywhere.

That’s why, when we created Frontier, we didn’t want to stick to just one coding design system. MUI, for example, is a very popular React design system, but it’s one of <very> many design systems that are rising and falling. Ant Design is still extremely popular, as is the TailwindCSS library. We’re seeing the rapid rise of Radix-based component libraries like ShadCN, as well as Chakra and NextUI. We knew that if we wanted to reach a wide audience, we could not rely on a limited subset of design systems; we had to create a “pluggable design system”.

Key Challenges in Implementing a Pluggable Design System

There are a few challenges to accomplishing this:

    1. Existing Project Integration:

      You have an existing project that already uses a design system. In this case, we are expected to scan the codebase, then understand and reuse that design system. We do this when Frontier starts: it looks through your codebase for local and external components and for usages of those components (you can restrict where it actually scans and control how deeply it looks at the code).

    2. Design and Code Component Mismatch:

      When we look at the Figma design, we don’t assume that the designer knows which component system will be used to implement it. Typically, in an enterprise with a design system team, the components in the design will match their code counterparts visually, but not necessarily in name or variants, and there may be no 1:1 mapping between the Figma and code components. In fact, the same design could be implemented with different design systems’ code components and be fully expected to match and work.

    3. Flexible Implementation:

      Once applied, components could have multiple ways to implement overrides and children (see the sketch after this list):

      1. Props / variants
      2. Component children
      3. Named slots
    4. The “Cold start” problem

      Even if you solve scanning the project’s repo, what happens when you encounter a brand new project and want to use a new library with it? In this case, you would have zero code usage examples and zero components that you are aware of…
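To make the third challenge concrete, here is a hedged TSX sketch of those three mechanisms (the component APIs are invented, standing in for whatever library a project uses):

```tsx
import React from "react";

// Hypothetical design system primitives.
declare const Button: React.FC<{ variant?: string; label?: string; children?: React.ReactNode }>;
declare const Icon: React.FC<{ name: string }>;
declare const Card: React.FC<{
  header?: React.ReactNode; // named slot
  footer?: React.ReactNode; // named slot
  children?: React.ReactNode;
}>;

// 1. Props / variants: everything is expressed through attributes.
export const viaProps = <Button variant="primary" label="Save" />;

// 2. Component children: content is nested inside the component.
export const viaChildren = (
  <Button variant="primary">
    <Icon name="save" /> Save
  </Button>
);

// 3. Named slots: specific regions are passed in as props.
export const viaSlots = (
  <Card header={<strong>Plans</strong>} footer={<Button label="Upgrade" />}>
    Pick the plan that fits your team.
  </Card>
);
```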

To overcome these problems we started with a few assumptions:

    1. Leverage Usage Examples:

      If the project has a robust set of usage examples, we can take inspiration from them and understand how this particular project utilizes its components, which helps us solve the props/overrides/children/named-slots issue.

    2. Custom Matching Model

      We had to create a custom model that understands how designers in design systems implement their components and how developers code the code components. This matching model was trained on a large set of open source Design System repos and open Figma design systems. It reached a surprisingly high matching rate on all our tests. Looks like many designers and many developers think in similar ways despite using very different conventions and actual designs.

    3. Cross-System Matching

      Once we were able to match within the same design system, the next challenge was to make the model more robust with matching across design systems – take a design that relies on AntD components and train the model to implement it using MUI components, or vice versa. This made the model much more versatile.

    4. Local Storage for Privacy and Security

      For security and privacy purposes, we have to encode and store our RAG embeddings database locally, on the user’s machine. This allows us to perform much of the work locally without having to send the user’s code to the cloud for processing.

       

Interestingly, the fact that we can store bits and pieces of this database also opens up possibilities for cold starts. An empty project can now simply state that it wants to use MUI and download and use the prebuilt embeddings. That gives the LLM all the context needed to produce much more robust results, even when the codebase is completely empty of any actual context.

The result is that Frontier can now generate code components in projects even if the Figma design system doesn’t actually match the code design library, and even when the codebase is completely devoid of any actual examples.

Does Frontier support NextJS? (June 21, 2024)


Short answer: Yes!

Long answer:

NextJS is an extremely popular framework for ReactJS that provides quite a few benefits, one of which is the mix between server and client-side components. 

Server-only components are components that do not use/need state and can pull their data from external APIs without worrying about credentials falling into the wrong hands. They can only be rendered on the server. Server components may contain server and/or client components.

Client-only components are components that have the “use client” directive defined. A component that uses state and other React APIs needs to be a client component, but a client component doesn’t require state to function.

In Next.js, components are server components by default. This ensures that fully formed HTML is sent to the user on page load. It’s up to the developer’s discretion to set the client boundaries. If components are not using state and are not making outward API calls, they can be implemented as either client or server components, which is ideal.

Since it can be quite complex to determine which type a particular React component is (server-only, client-only, or agnostic), Frontier generates client components by default when it detects Next.js. This is done by adding the ‘use client’ directive at the beginning of the component file.

This issue arises because it can be challenging to identify if the rendered component tree includes descendants that must be rendered on the client side. Without a ‘use client’ directive for those components, runtime errors may occur.

If you remove the ‘use client’ directive and the code still builds with no errors, the client boundaries have been set correctly, and you can let Next.js determine whether the component is rendered on the client or the server. If, on the other hand, removing it causes a build error, one or more of the descendants uses client-only APIs but hasn’t declared itself as a client component. In this case, you can re-add the ‘use client’ directive to the code we’ve created, or add it directly inside the offending descendant.
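Concretely, a generated client component starts with the directive on its first line; the component body below is an invented example:

```tsx
"use client"; // added by Frontier when it detects Next.js

import { useState } from "react";

// Illustrative body; the directive above is the relevant part.
export const Counter = () => {
  const [count, setCount] = useState(0);
  return <button onClick={() => setCount(count + 1)}>Clicked {count} times</button>;
};
```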

So, what’s the bottom line?

Short answer: Yes, Frontier supports NextJS!

Start here!

Generative code: how Frontier solves the LLM Security and Privacy issues (June 5, 2024)


When it comes to generative AI and LLMs, the first question we get is how we approach the security and privacy aspects of Frontier. This is a reasonable question given the copyright issues that many AI tools are plagued with. AI tools, after all, train on publicly available data and so could expose companies to potential copyright liability.

But it’s not just that: companies have invested heavily in their design languages and design systems, which they would never want exposed externally, and their codebase is a critical asset that they would never want used for LLM or AI training.

When designing Frontier, privacy and security were foremost concerns from day one. First, it was clear to us that Frontier users cannot expose their codebase to anyone, including us. That means much of the data processing had to take place on the user’s device, which is quite difficult given that we run in a sandbox inside a VSCode extension. Second, we needed to expose the minimum amount of data and design to the cloud. Additionally, any data that needed to be stored had to be stored in a way that could be shared by multiple team members, but not kept on the cloud. Finally, none of our models could have any way to train on the user’s design or codebase.

The first part was isolating the Figma designs. By building a simplified data model in memory, from within VSCode, using the user’s own credentials, we effectively facilitate an isolated connection between the user and the Figma APIs, without us in between and without our servers ever seeing a copy of the design.
 
The typical implementation for generative code tools is to collect the entire codebase, break it into segments, encode the segments into embeddings, and store them in a vector database. This approach is effective, but it won’t work well in our case, since storing this data on our servers would mean we are exposed to it. In addition, the codebase is continually evolving and would need to be re-encoded and stored every so often, which would make the process slow and ineffective.

Instead, our approach was to develop an in-memory embedding database, which can be stored and retrieved locally and rebuilds extremely quickly, even on large codebases. To secure this data, we store it in the user’s workspace, where it can be included in the git repository and shared between users, or simply rebuilt per user.
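A minimal sketch of the concept (the real store and retrieval are more involved; everything here is illustrative):

```tsx
import { readFileSync, writeFileSync } from "fs";

// Hypothetical local embedding store: kept in memory, persisted as JSON in the
// workspace (e.g. under .anima) so it can be shared through git or rebuilt per user.
type Entry = { id: string; vector: number[] };

class LocalEmbeddingDB {
  private entries: Entry[] = [];

  add(id: string, vector: number[]): void {
    this.entries.push({ id, vector });
  }

  // Cosine-similarity search over the in-memory vectors.
  nearest(query: number[]): Entry | undefined {
    const dot = (a: number[], b: number[]) => a.reduce((s, v, i) => s + v * b[i], 0);
    const norm = (a: number[]) => Math.sqrt(dot(a, a));
    let best: Entry | undefined;
    let bestScore = -Infinity;
    for (const e of this.entries) {
      const score = dot(query, e.vector) / (norm(query) * norm(e.vector) || 1);
      if (score > bestScore) {
        bestScore = score;
        best = e;
      }
    }
    return best;
  }

  save(path: string): void {
    writeFileSync(path, JSON.stringify(this.entries));
  }

  load(path: string): void {
    this.entries = JSON.parse(readFileSync(path, "utf8"));
  }
}
```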
 
But this would be useless if we had to send a large code sample to an LLM for each line of code we generate. Instead, we implemented a local model that runs in VSCode, so when we do need to use an LLM, we share the interface of the components instead of their code. Users can opt in to improve the results by including a few real-world usage examples: simplified, thin snippets showing how, say, a Button component is used in the codebase, but never how Button is implemented or what it actually looks like or does.

By limiting the amount of data and anonymizing it, we can guarantee that the LLM doesn’t train on or store the user’s code in any way.

But how do we guarantee that data doesn’t get “leaked” back into the codebase from outside sources the LLM trained on, exposing the company to potential copyright risk? First, we limit the type of code that the LLM can generate to specific component implementations, and only after it passes a guard rail system. The guard rail validates that the code makes sense, and can identify hallucinations that might invalidate the code or introduce copyright liability into the codebase. If the code passes the guard rail system, we can be confident that the results correlate with what the user expects from the component code.

Finally, for full transparency, we store the data in open JSON files inside the .anima folder in your project’s workspace. Different workspaces have different settings and components. Sharing this information between users can be done through git (or any shared file system), which keeps Anima from ever being exposed to the cached component data, usage data, the codebase, or the Figma design data.

Joining a New Project? Code Smarter and Faster from Your First Day (June 4, 2024)


Joining a new project can be as exciting as it is daunting, especially when you need to familiarize yourself quickly with the existing codebase and development practices. Frontier, our innovative coding assistant, is designed to seamlessly integrate newcomers into the development process, making the transition smooth and efficient.

Here’s how Frontier can be a game-changer for developers new to a project:

  1. Effortless Component Discovery:

    • Seamless Integration: Frontier eliminates the need to manually search for components. Its advanced matching algorithms automatically identify and suggest the right components from the existing codebase that correspond to elements in your Figma designs.
    • Accelerated Learning Curve: This feature not only speeds up the development process but also facilitates a deep understanding of the component architecture without the need to sift through documentation or seek extensive input from senior developers.
  2. Learn from the Best with Contributor Insights:

    • Follow Proven Practices: Frontier provides details about the last contributor and modification dates for each component usage, guiding you to follow coding patterns endorsed by top developers within your team.
    • Access to Mentorship: Highlighting contributors also helps identify potential mentors, offering insights into whom to approach for advanced learning and advice on adhering to the best practices.
      Frontier - code usage - last edited
  3. Streamlined Onboarding Process:

    • Rapid Contribution: Frontier’s deep integration with your project’s existing structures allows you to start contributing meaningful code almost immediately, minimizing the usual learning and adjustment period.
    • Consistent Code Quality: Frontier respects and adapts to your project’s established coding conventions, ensuring all new code is consistent and harmonious with existing development standards.

Get Frontier

Here’s why Frontier can be a game-changer for Managers and Teams:

  1. Accelerate Developer Ramp-Up:

    Drastically shorten the learning curve for new developers, enabling quicker and more impactful contributions.

  2. Ensure Coding Consistency:

    Maintain a high standard of code quality from day one, minimizing the need for later corrections and ensuring consistency across the project.

  3. Boost Team Collaboration:

    Create a supportive environment where new developers are well-informed about team coding responsibilities and patterns, fostering better communication and collaboration.

Frontier isn’t just a tool; it’s your partner in coding. By removing the common barriers new developers face, Frontier allows you to focus on what you do best: coding solutions that matter.

Start your journey with Frontier today and experience a smoother, more intuitive integration into your new project.

Get Access

LLMs Don’t Get Front-end Code (May 28, 2024)



I see this pattern repeat every few months: a new multimodal LLM comes out, and someone on Twitter takes a screenshot of a game or app, feeds it to the LLM, and gets working code that actually runs.
 
Hence the meme: Front End Developers, you will soon be replaced by AI…
 
After so many years of managing software, I should know better. The variations between teams, and between projects within each team, are infinite. Each team uses a different combination of tools, frameworks, libraries, coding styles, and CSS languages/frameworks, all of which are constantly changing. Small startups typically adopt a public Design System and adapt it to their needs, while larger companies have their own customized Design System components maintained by a dedicated team. Good luck asking an LLM to conform to these requirements when it has zero context for that combination of tools and components.
 
So, good luck trying to get an LLM to code in your style, use your front-end components, and show an in-depth understanding of design. At best, it can take a 2D image of your screens and make them do something… Turning that result into production code will likely take you longer than starting from scratch.
 
More so, as the tools evolve, the level of complexity and thought that goes into these combinations makes front-end developers professional problem solvers. They typically get an impossible Figma design, which they have to fully understand, then negotiate changes with the designer until they can, hopefully, adapt it to the design system. These are very human problems, and they require human operators to drive them.

Enter: Useful generative coding

But LLMs are revolutionary and will make a huge impact on developers. Given the right context, AI can locate and correct bugs, help design the software, and turn developers into 10x individual contributors (10xICs). This is precisely what GitHub Copilot does: it learns from your project and, given the huge amount of relevant context, attempts to predict what you’re trying to accomplish and generate the code for that prediction. Developers get an efficiency boost using Copilot, but there’s just one problem…
 
Copilot understands concepts like functionality, components, and state. It fundamentally does not understand design. Why would it? It has no context for the design the front-end developer is working from, so when you start creating React components, it will just give you boilerplate code that it most likely learned from your project or from other designs. I often see it generate endless rounds of meaningless HTML gibberish; its chance of actually predicting your design is infinitesimally small. And matching your particular components and giving you code that’s of value? That’s sci-fi…
 
That’s why many front-end developers either do not use GitHub Copilot at all, or use it for everything apart from design. But what if you could extract context from the design? That’s where Anima Frontier comes in. Frontier has context for the Figma design, including a deep understanding of Figma components, overrides, and the Figma Design System, as well as your codebase and your design system’s code components. By matching those, and by generating scaffolding code based on the designer’s specifications (rather than a static snapshot of their design), the resulting code is a perfect companion made specifically for front-end developers. It works together with GitHub Copilot to fill the void that is design.
 
We do not really think that Designers or Front-End Developers are going away any time soon, and we don’t think it’s realistic that they’ll be replaced by automated tools. Tools like Frontier are intended to work like Copilot: to make front-end development easier and more approachable. By providing context and assistance to the developer, we can make front-end developers more productive. This is exactly the type of tool I wish I had when I started coding; it’s the perfect way to extract the most from what the designer has already embedded in the design, sometimes without even realizing it.
