I'm Jasmin

FULL-STACK DEVELOPER With 5+ Years Of Experience.
Solutions expert specializing in MVP, SaaS, and AI
Selected Work

My Projects

I was team lead on these projects. Working directly with founders/entrepreneurs, we created the solutions and wrote the code.

  • LeanFocus 2.0

    THE STORY BEHIND THE CODE: A DEEP DIVE
    SHOWCASING THE APP

    Summary with focus on technical aspects

    As the lead developer, I spearheaded the development of LeanFocus 2.0, an enterprise-level solution conceived to enhance the efficiency and real-time data tracking of hundreds of machines across multiple factories worldwide. The process was not without its challenges. Chief among them was the complex journey to a mutual understanding of terminology, a crucial step that proved pivotal throughout development. This understanding was essential in aligning the project's objectives with the operational realities of factories, ensuring the app met the nuanced needs of its users. Designing the user experience (UX) presented its own set of challenges, especially in how real-time data should be presented effectively and intuitively on the Andon board. The goal was to ensure that the vast amount of real-time information was not only accessible but also actionable at any given moment.

    Embracing a microkernel architecture from the outset allowed us to ensure flexibility and the seamless integration of different data sources and functions. The backend, written in the Express Node.js web application framework, incorporates modern databases such as MongoDB for general data storage and InfluxDB for time-series data, crucial for real-time monitoring. Moreover, a complex custom user roles system was architected to cater to various stakeholders' needs, ensuring each user experiences a tailored interface and functionality that aligns seamlessly with their operational prerogatives.

    MQTT was utilized for reliable sensor data connectivity, underpinning the app's real-time monitoring capabilities. This technology choice, alongside the intricate efforts in terminology and UX design, has resulted in a robust platform that not only meets the immediate operational needs but also adapts to the dynamic requirements of global factories.
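    To illustrate the sensor-data path, here is a minimal sketch (not the production code; the topic layout and payload fields are hypothetical) of turning a JSON sensor payload received over MQTT into an InfluxDB line-protocol string:

    ```python
    import json
    import time

    def payload_to_line(topic: str, payload: str, ts_ns: int | None = None) -> str:
        """Convert a JSON sensor payload from an MQTT topic into InfluxDB line protocol.

        Topic layout (hypothetical): factory/<line>/<machine-id>.
        """
        data = json.loads(payload)
        machine = topic.split("/")[-1]
        fields = ",".join(f"{key}={value}" for key, value in data.items())
        timestamp = ts_ns if ts_ns is not None else time.time_ns()
        return f"machine_status,machine={machine} {fields} {timestamp}"
    ```

    In a real deployment a function like this would run inside an MQTT subscriber callback (for example paho-mqtt's `on_message`), and the resulting lines would be written to InfluxDB in batches.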

    App description

    LeanFocus 2.0 streamlines the monitoring and management of factory operations by offering comprehensive tracking of machinery efficiency, real-time data analysis, and intuitive operational scheduling. At its core, the app is designed to improve overall equipment effectiveness (OEE) through detailed insights into machine availability, performance, quality ratings, and scheduled versus unscheduled downtimes. It includes a master navigation system for locating factories globally, managing orders, and facilitating the seamless interaction between machine operators and factory management. The intuitive Andon board display, both for comprehensive overviews and mobile-optimized availability checks, enhances the decision-making process by rendering complex data into easily interpretable visuals. The terminal functionality further ties into the hands-on aspects of production, allowing operators to input data regarding stoppages, quality, and order progress directly adjacent to their machines.
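    The OEE metric mentioned above has a standard definition: the product of availability, performance, and quality. As a one-line sketch:

    ```python
    def oee(availability: float, performance: float, quality: float) -> float:
        """Overall equipment effectiveness: the product of its three standard
        factors, each expressed as a ratio in [0, 1]."""
        return availability * performance * quality
    ```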

    Technical details

    LeanFocus 2.0 is written with a React frontend and an Express (Node.js web framework) backend. Its backbone is a microkernel architecture, chosen for its robustness in supporting the distributed nature of factory operations and for facilitating agile development and deployment practices. This design choice was paramount in achieving a configurable system that could adapt to various operational scales and requirements without compromising performance or security. MongoDB and InfluxDB were selected for their scalability and their aptitude for handling complex queries and large datasets effectively, critical factors given the app's data-intensive operations. The MQTT protocol was implemented for its reliable message delivery, allowing fault-tolerant communication between the vast array of machine sensors and the central system. The software architecture comprises a microkernel, essential main modules, and a S.A.F.E. (Stand Alone Front End), with an additional adapter for third-party API integrations to ensure comprehensive functionality and user experience. A sophisticated, custom user roles system was developed to provide targeted access and features based on user responsibilities and needs, enhancing both security and usability.
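    To illustrate the microkernel idea (a simplified sketch, not the actual LeanFocus code), the kernel can be reduced to a registry that only routes requests to pluggable modules, so new modules are added without touching the core:

    ```python
    from typing import Callable, Dict

    class Microkernel:
        """Minimal microkernel: the core only dispatches to registered plug-ins."""

        def __init__(self) -> None:
            self._modules: Dict[str, Callable[..., object]] = {}

        def register(self, name: str, handler: Callable[..., object]) -> None:
            self._modules[name] = handler

        def dispatch(self, name: str, *args, **kwargs) -> object:
            if name not in self._modules:
                raise KeyError(f"no module registered for '{name}'")
            return self._modules[name](*args, **kwargs)

    # A hypothetical plug-in module, registered without modifying the kernel:
    kernel = Microkernel()
    kernel.register("oee", lambda a, p, q: a * p * q)
    ```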

    Diagrams


    Technology and services used

    • React
    • Express
    • MongoDB
    • InfluxDB
    • AWS
  • Idea Elaboration App

    THE STORY BEHIND THE CODE: A DEEP DIVE

    Summary with focus on technical aspects

    Developed an Idea Elaboration App focused on promoting Nuclear Energy through the generation of 130 actionable items. Leveraging information from 300 diverse sources, including books, blog posts, and interview transcriptions, the AI app ensured a comprehensive set of outcomes in alignment with project goals. The technical implementation involved using Jupyter Notebook for its flexibility and real-time feedback capabilities. Langchain's abstraction layer interfaced with the GPT-3.5-Turbo model, handling source loading, text conversion, and token splitting. OpenAI’s ada embedding model and FAISS Vector Store facilitated the creation and storage of embeddings, while Langchain's Prompt Template functionality orchestrated the iterative prompt-response cycles. Technologies used include Langchain, FAISS Vector Store, and Python with Jupyter Notebook.

    App description

    The primary objective of this project was to generate a comprehensive set of actionable items aimed at promoting the use of Nuclear Energy, based on diverse sources such as scientific articles and books. Simultaneously, a human expert researcher extracted actionable items by hand. The results of both approaches, the items generated by the AI app and those derived by the researcher, proved commensurate and in alignment with the project's overarching goals.

    The AI app generated 130 actionable items by leveraging information from approximately 300 knowledge sources. Among these sources, four were books, while the remainder consisted of blog posts and interview transcriptions.

    Technical details

    In alignment with the app's goal of creating a document with detailed actionable items, we opted for Jupyter Notebook as our coding environment, owing to its flexibility and real-time feedback capabilities. Jupyter Notebook also stores the results of code-block executions, which aids fast development iteration when testing how prompt tweaks affect responses. To interface with the GPT-3.5-Turbo model, we leveraged Langchain's abstraction layer over the OpenAI Python SDK.

    The workflow began with loading sources (PDF, .txt, .docx) and converting them into raw text using Langchain's Unstructured Document Loader. Subsequently, to address the token limit of 4096 tokens OpenAI's GPT-3.5-Turbo model had at the time, we employed Langchain's Token Text Splitter to break down the sources into manageable text chunks.
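    The chunking step can be sketched as follows. This is a simplified stand-in: whitespace splitting replaces the model's real tokenizer, which Langchain's Token Text Splitter uses under the hood.

    ```python
    def split_into_chunks(text: str, max_tokens: int = 4096, overlap: int = 0) -> list[str]:
        """Split text into chunks of at most max_tokens tokens, with optional overlap.

        Whitespace splitting stands in for a real tokenizer here.
        """
        tokens = text.split()
        step = max_tokens - overlap
        return [" ".join(tokens[i:i + max_tokens]) for i in range(0, len(tokens), step)]
    ```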

    The third step was to create embeddings from those chunks and store them in a vector store. We used OpenAI's ada embedding model to create the embeddings and the FAISS (Facebook AI Similarity Search) vector store to store them locally. FAISS provides several similarity-search methods spanning a wide spectrum of usage trade-offs.
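    The principle behind the vector-store lookup is cosine similarity between embeddings. A brute-force NumPy sketch follows; FAISS's exact index performs the same comparison, just far faster at scale:

    ```python
    import numpy as np

    def top_k(query: np.ndarray, store: np.ndarray, k: int = 2):
        """Return indices and scores of the k most cosine-similar rows of `store`."""
        q = query / np.linalg.norm(query)
        rows = store / np.linalg.norm(store, axis=1, keepdims=True)
        scores = rows @ q
        order = np.argsort(scores)[::-1][:k]
        return order, scores[order]
    ```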

    The subsequent step involved feeding every chunk of text along with the instructions to the GPT-3.5-Turbo model to produce actionable items based on those chunks of text. After this initial prompt-response cycle, we obtained 130 actionable items from the GPT model, documented in a text file. Finally, in a subsequent prompt-response cycle, each generated actionable item from the first cycle, along with relevant chunks of text, was fed back to the GPT model. The model was prompted to elaborate on each actionable item based on the associated text chunks. This iterative process resulted in the production of a comprehensive document containing 130 elaborated actionable items.
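    The two prompt-response cycles described above can be sketched with a stubbed model call; `fake_llm` stands in for the real GPT-3.5-Turbo request, and the prompt wording is illustrative:

    ```python
    def fake_llm(prompt: str) -> str:
        # Stand-in for the OpenAI chat completion call used in the real app.
        return f"response to: {prompt[:40]}"

    def two_pass(chunks: list[str]) -> list[str]:
        # Cycle 1: extract an actionable item from each chunk.
        items = [fake_llm(f"Extract an actionable item from: {c}") for c in chunks]
        # Cycle 2: elaborate each item using its source chunk.
        return [fake_llm(f"Elaborate '{item}' using: {chunk}")
                for item, chunk in zip(items, chunks)]
    ```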

    Diagrams


    Technology and services used

    • Langchain
    • FAISS Vector Store
    • Python with Jupyter Notebook
  • TheaAI

    Health & wellness AI-powered iOS mobile app with engaging avatar-led chats

    OpenAI, Swift, SwiftUI, SwiftData, HealthKit, WidgetKit


    THE STORY BEHIND THE CODE: A DEEP DIVE
    SHOWCASING THE APP

    Summary with focus on technical aspects

    Developed TheaAI, a personalized health & wellness iOS app that uses HealthKit and EventKit for tailored experiences, with OpenAI LLM models powering the avatar-led insights and conversations. Employed SwiftUI and SwiftData for the App Store marketing advantages that come with adopting Apple's latest technologies. Overcame user experience challenges common to health apps by introducing WidgetKit for presenting health insights. Focused on architectural design to manage code complexity and leveraged Swift's versatility to achieve high extensibility in data processing modules and standardized communication among code modules.

    App description

    TheaAI is a fun and personalized health & wellness iOS mobile app with engaging avatar-led chats and journeys for a tailored wellness experience. It uses HealthKit to access the user's health data and personalize the app experience, and EventKit to personalize the scheduling of actionable recommendations while respecting the user's calendar.

    The essence of the app is the chatbot with different avatar coaching styles (fierce, cheerleading or educational) which are chosen based on the user’s preference. Additionally, the app offers a widget that glanceably presents health insights generated based on data collected by HealthKit and EventKit.

    iOS app is live here.

    Technical details

    The decision to use the latest Apple technologies, SwiftUI and SwiftData, was driven by the marketing-related benefits associated with this choice. Specifically, the App Store tends to favor apps that leverage the latest technologies over those developed using older ones.

    Interesting challenges arose from the realization that notifications in health domain apps were not an effective means of reminding users. To address this issue, Apple's WidgetKit was employed to create an iOS widget. During the widget's development, numerous optimization challenges emerged in relation to the utilization of OpenAI models and the need to adhere to the hardware resource limitations of both iPhone and iPad.

    The architectural design of the system played a crucial role in the app's development, with a keen focus on keeping code complexity in check. This was particularly important because the entirety of the app's code is essentially "frontend mobile app code".

    Swift's versatile capabilities enable developers to write code in both functional and object-oriented paradigms. In the implementation of data processing-related software modules, a functional paradigm was employed, while the object-oriented paradigm was used in other parts of the system for data, interface, and communication standardization, aiming for high extensibility. The necessity for high extensibility in TheaAI system stemmed from the constant need for prompt tweaks and tests, relying on data retrieved from HealthKit and EventKit.

    Various cost optimization strategies for OpenAI were brainstormed, including the use of more economical models where the decrease in response quality is minimal. Additionally, considerations were made for storing certain Large Language Model (LLM) responses, particularly when their reuse wouldn't cause significant unwanted determinism in the app. For instance, health knowledge facts, answerable deterministically, could be retrieved from the database using semantic search methods.

    Diagrams


    Technology and services used

    • Swift
    • SwiftUI
    • SwiftData
    • HealthKit
    • EventKit
    • WidgetKit
    • OpenAI and OpenAI SDK
    • Sentry
    • Figma
    • Trello
  • NES Health API

    THE STORY BEHIND THE CODE: A DEEP DIVE

    Summary with focus on technical aspects

    Developed the NES Health API, a Retrieval Augmented Generation (RAG) API tailored to the bioenergetics health and wellness industry. Utilized the FastAPI Python framework for expeditious API development with built-in Swagger support. The Langchain framework facilitated seamless integration with the Pinecone Vector Database and allowed a time-efficient implementation of the streaming-responses feature. FastAPI's REST design, coupled with Swagger support, ensured smooth collaboration with the client's development team. The API allows loading files into Pinecone by extracting raw text, segmenting it into chunks, and generating embeddings stored in the Pinecone Vector Database, enabling users not only to filter and retrieve relevant information but also to get answers to their questions derived from the knowledge base. Implemented semantic similarity algorithms and dynamic prompts for precise answers, and deployed the API on AWS EC2.

    App description

    This project entails the development of a Retrieval Augmented Generation (RAG) API featuring diverse endpoints, each tailored to specific functionalities. It is designed for use by a prominent leader in the bioenergetics health and wellness industry. The primary goal is to provide clients with the ability to extract precise answers to specific questions from a comprehensive collection of texts, including various formats such as PDFs, DOCX, and TXT files.

    Diagrams:
    https://miro.com/app/board/uXjVME3pm0w=/?share_link_id=651464295180
    https://miro.com/app/board/uXjVM_PWE3c=/?share_link_id=870386612405

    Technical details

    The choice of the FastAPI Python framework was driven by its expeditious API development capabilities and built-in Swagger support. Following REST design best practices and having the built-in Swagger support resulted in the client’s development team using the API in their system without any friction or need to reach out to our development team. Leveraging the Langchain framework, specifically its Python SDK, alongside the Pinecone Vector Database (and its Python SDK), proved instrumental in achieving the project's objectives.

    One endpoint facilitates loading files into Pinecone. This process involves extracting raw text from files using various Document loaders from Langchain, segmenting the text into manageable chunks (guided by the 4096-token limit OpenAI models had at the time), and generating embeddings using OpenAI's ada embedding model. These embeddings, along with the text they are derived from and user-defined metadata, are stored in the Pinecone Vector Database. The reasoning behind choosing Pinecone was its ability to filter embeddings by user-defined metadata, which the project specification required.

    Post-file upload, API users gain the ability to filter text chunks based on metadata or retrieve relevant chunks using a semantic similarity algorithm. We also implemented the option to edit or add metadata on text chunks. Semantic similarity involves creating embeddings from the API user's questions and comparing them to the existing embeddings of text chunks in Pinecone. API users can obtain answers to their questions by feeding queries and relevant text chunks from Pinecone into the GPT model; the most relevant chunks are identified through semantic similarity. To ensure the model derives answers exclusively from these chunks, dynamic prompts are employed: instructions within the prompt guide the model to base its response exclusively on the specified sources. For this purpose, Langchain's Prompt Template option was integrated into the solution (the instructions remain consistent for each question, with variations occurring only in the questions themselves and the corresponding chunks of information).
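    The dynamic-prompt idea can be sketched with plain string templating. The actual project used Langchain's Prompt Template; the instruction wording below is illustrative:

    ```python
    TEMPLATE = (
        "Answer the question using ONLY the sources below. "
        "If the answer is not contained in the sources, say you do not know.\n\n"
        "Sources:\n{sources}\n\n"
        "Question: {question}\nAnswer:"
    )

    def build_prompt(question: str, chunks: list[str]) -> str:
        """Fixed instructions; only the question and retrieved chunks vary per call."""
        sources = "\n".join(f"- {chunk}" for chunk in chunks)
        return TEMPLATE.format(sources=sources, question=question)
    ```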

    For testing purposes, we deployed the API to an AWS EC2 instance.

    Diagrams


    Technology and services used

    • Langchain
    • Pinecone Vector Database
    • Python with FastAPI
    • AWS
  • AI Chatbot Arena - Ellie AI

    THE STORY BEHIND THE CODE: A DEEP DIVE
    SHOWCASING THE APP

    App description

    Currently in development, this project aims to empower supporters of AI while also addressing concerns from anti-AI advocates. The overarching objective is to present compelling (counter)arguments highlighting the advantages of AI usage. After discussing the specific format of the app, we decided on a chatbot arena, where the user asks questions and receives two distinct answers from both perspectives on the risk of using AI. After the initial phase of the project, in which we constructed a comprehensive knowledge graph encompassing arguments both in favor of and against the assertion that "AI is safe to use", we started building the UI for the chatbot arena.

    Technical details

    The objectives we have achieved so far are:

    • Produced around 300 arguments that either support or oppose the main assertion. This was done by loading multiple text sources that deal with the risks/benefits of using AI (books, articles, podcast transcripts of leading philosophers/thinkers in this area like David Deutsch, Yann LeCun…), using the document loaders available in the Langchain framework. These sources are then split into manageable chunks of text using Langchain's Character Text Splitter. We then prompted the GPT-4 model to extract relevant arguments from those chunks. For this purpose we again used Langchain, which provides an abstraction layer over OpenAI's SDK that simplifies the creation of dynamic prompts with its Prompt Templating functionality.
    • Created embeddings of the arguments and stored them in Supabase, along with the text form of the arguments and other data so we can filter them (e.g. whether an argument is optimistic or pessimistic regarding the use of AI). We used OpenAI's ada embedding model for creating embeddings and the PGVector extension for PostgreSQL (Supabase is built on top of PostgreSQL) for storing them. We chose PostgreSQL (Supabase) for the vector store because it is stable and provides both semantic similarity and SQL functionality.
    • Since we had several very similar or identical arguments, we ran an algorithm to identify the most similar arguments and merge them, using a threshold of 94% semantic similarity.
    • Matched the best opposing argument to every pessimistic argument. We did this by prompting the GPT-4 model to create the best opposing argument for each pessimistic argument, then created an embedding of the GPT-4-produced argument and compared it with the embeddings of optimistic arguments already stored in Supabase.
    • Set up the project infrastructure for the chatbot arena using the Next.js framework, with two distinct API routes. Each route generates an answer to the user's question from a given perspective (optimistic or pessimistic toward the use of AI). To achieve this, we implemented dynamic prompts: each time a user asks a question, we create embeddings from it and run a similarity search over the argument embeddings stored in Supabase, detecting the most relevant arguments from both perspectives. The relevant arguments are then inserted into two prompts (one for the optimistic chatbot, one for the pessimistic) along with distinct instructions for each chatbot. The dynamic prompts are sent to OpenAI's GPT-4-Turbo model, and the generated answers are streamed to the frontend using Vercel's AI package.
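    The 94% similarity merge described above can be sketched as a greedy pass over normalized embeddings (a simplification of the actual algorithm; the toy vectors below are illustrative):

    ```python
    import numpy as np

    def dedup_indices(embeddings: np.ndarray, threshold: float = 0.94) -> list[int]:
        """Keep an argument only if no already-kept one is >= threshold cosine-similar."""
        normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
        kept: list[int] = []
        for i, vec in enumerate(normed):
            if all(float(vec @ normed[j]) < threshold for j in kept):
                kept.append(i)
        return kept
    ```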

    Diagrams


    Technology and services used

    • Langchain
    • Supabase (Python SDK)
    • PG Vector PostgreSQL Extension
    • Next.js 14
    • Vercel AI
  • AI IQ Solver

    THE STORY BEHIND THE CODE: A DEEP DIVE
    SHOWCASING THE APP

    Summary with focus on technical aspects

    AI IQ Solver is an application that assesses the problem-solving capabilities of artificial intelligence models in the context of IQ puzzles. Developed using the Next.js framework, the application integrates two software development kits (SDKs), OpenAI and Replicate, and is hosted on the Railway platform.

    App description

    AI IQ Solver is an application designed to assess the problem-solving capabilities of artificial intelligence models in the context of IQ puzzles.

    Link to the app: https://ai-iq-solver-production.up.railway.app/
    YouTube video featuring the app: https://youtu.be/QrSCwxrLrRc?si=2YsbFE2-a5sBDYkB
    Wireframe/Diagram: https://miro.com/app/board/uXjVMwx38UA=

    Technical details

    Developed using the Next.js framework, the application incorporates an API that integrates two software development kits (SDKs): OpenAI and Replicate. OpenAI's Node SDK is employed to communicate with the GPT-4 model, while Replicate's Node SDK is used to interface with the Mini-GPT-4 multimodal model. Two models were needed because, at the time, OpenAI models could not process multimodal inputs. Users submit a collection of .txt and image files (.jpeg, .png) containing IQ puzzles. The application categorizes the puzzles into textual and visual formats, directing textual puzzles to the GPT-4 model and visual puzzles (comprising both text and image components) to the Mini-GPT-4 model. The responses generated by the models are then presented in the application interface. The user interface is built with the Material UI library. For deployment, AI IQ Solver is hosted on the Railway platform, ensuring efficient and scalable access.
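    The routing step can be sketched as follows (a simplified stand-in, not the actual Next.js code; the extensions match the description above):

    ```python
    from pathlib import Path

    TEXT_EXTS = {".txt"}
    IMAGE_EXTS = {".jpeg", ".jpg", ".png"}

    def route_puzzles(files: list[str]) -> tuple[list[str], list[str]]:
        """Split uploaded puzzle files between the text model and the multimodal model."""
        textual, visual = [], []
        for name in files:
            ext = Path(name).suffix.lower()
            if ext in TEXT_EXTS:
                textual.append(name)   # would be sent to GPT-4
            elif ext in IMAGE_EXTS:
                visual.append(name)    # would be sent to Mini-GPT-4 via Replicate
        return textual, visual
    ```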

    Diagrams


    Technology and services used

    • Next.js
    • OpenAI Node SDK
    • Replicate Node SDK
    • GPT-4
    • Mini-GPT-4
    • Railway
  • AlexAI

    Instant answers to complex energy and climate questions with AlexAI

    OpenAI, Langchain, Python, AsyncIO, Supabase, Swift, Railway, and others


    THE STORY BEHIND THE CODE: A DEEP DIVE
    SHOWCASING THE APP

    Summary with focus on technical aspects

    Developed AlexAI, an AI-powered chatbot that provides instant answers to complex energy and climate questions. The backend was built with FastAPI and deployed on Railway cloud for scalability, while the frontend was custom-designed. Supabase was chosen for managing the database, streamlining authentication and cloud storage. The Langchain framework optimized LLM integration for text summarization without the need for low-level custom implementations. An iOS app was developed, initially leveraging WebView to reach the market early, with a later decision to transition to a fully fledged natively coded mobile app. Technologies included OpenAI, Langchain, Python, AsyncIO, Supabase, Swift, Railway, and others.

    App description

    Instant answers to your questions on energy, environmental, and climate issues, based on the mind and work of Alex Epstein, pro-human philosopher and energy expert. AlexAI uses a custom GPT model to handle even the most complex energy and climate questions, drawing on more than 150 processed sources including Alex's blog posts, interviews, podcasts and books.

    AlexAI critically analyzes every user message and can identify and re-frame questions with bad underlying assumptions.

    Web App is live on: https://alexgpt.ai/
    iPhone and iPad app is live on: https://apps.apple.com/gb/app/alexai/id6448963081
    Blog post about AlexAI: https://alexepstein.substack.com/p/your-exclusive-early-access-to-alexai?utm_campaign=email-post&r=7b6oh&utm_source=substack&utm_medium=email

    Technical details

    The app is written as a FastAPI backend that incorporates server-side rendering. It is deployed on the Railway cloud service, which offers effortless out-of-the-box scalability and fast deployment iterations. AlexAI uses Supabase as a service that takes care of hosting the database and interacting with it via a REST API. Supabase also speeds up development by offering solutions for authentication, authorization and cloud file storage.

    The frontend is written in plain HTML, CSS and JavaScript and is based on custom designs provided by a UI designer. There was no need for fully fledged frontend or CSS frameworks because the web app required no complex user interactivity.

    The design requirements underwent several adjustments during development, and new features were constantly being brainstormed. Managing these changes without notable disruption or code rewrites became a continuous attestation of the development team's commitment to clean coding practices. This approach enabled quick iterations, crucial given the fast pace of state-of-the-art LLM (Large Language Model) development and related technologies. One notable instance is the stream of changes in the Langchain framework, which acts as a sophisticated abstraction layer streamlining the use of LLM technologies in applications. New Langchain features were always on the team's radar, because abstracting away lower-level data processing implementations is more time-effective than writing custom code and also reduces code complexity. For example, the team did not implement the map-reduce data processing technique from scratch, but instead used Langchain's implementation when experimenting with different text summarization methods.
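    The map-reduce summarization pattern works as sketched below, with a stubbed summarizer standing in for the real LLM calls that Langchain's chain makes:

    ```python
    def fake_summarize(text: str) -> str:
        # Stand-in for an LLM summarization call.
        return text[:60]

    def map_reduce_summary(chunks: list[str]) -> str:
        # Map: summarize each chunk independently (parallelizable).
        partials = [fake_summarize(chunk) for chunk in chunks]
        # Reduce: summarize the concatenation of the partial summaries.
        return fake_summarize(" ".join(partials))
    ```

    The appeal of the pattern is that each map step fits within the model's context window even when the full document does not.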

    Communication with the designer was assisted by Figma, which reduced the friction around receiving the designer's deliverables and feedback to a minimum. Initially, Trello was the project management platform of choice, but the team's curiosity led to adopting Linear as a new, emerging technology. It helped automate ticket creation through simple Slack messaging and ticket closure through GitHub pull requests.

    In addition to the web app, an iOS app was also built. Because the web app architecture was built around the server-side rendering paradigm, the fastest way to get the iOS app to market was to use the WebView functionality offered by Apple's WebKit framework. Due to concerns about inefficiency, extensibility and maintenance, and with infrastructure and code complexity rising, a decision was made to develop a fully fledged mobile app in the future, either through native Swift code without WebView or by employing cross-platform solutions such as React Native or Flutter.

    Diagrams


    Technology and services used

    • OpenAI
    • Langchain
    • Python with FastAPI
    • AsyncIO
    • Supabase (which includes Postgres database)
    • Swift
    • RevenueCat
    • ChromaDB vector database
    • Railway
    • Sentry
    • Figma
    • Trello and Linear
About Me

I'm Jasmin

Full-Stack Developer

Solutions Expert

From the first meeting, first line of code to a fully tested deployed app, I can lead, design and build throughout the entire process.

I have over 8 years of experience working directly with founders/entrepreneurs building successful, innovative applications. I am proud of my clients' success and that I have helped them achieve their dreams.

Technology Stack Master List

I am committed to continuous learning and have language-agnostic coding dexterity, allowing me to adapt easily to new technologies. Below is a list of the technologies I am good at, along with an intuition of how long it would take me to learn new ones.

Frameworks, technologies and libraries that I am proficient at:
  • React (javascript)
  • Node.js (javascript)
  • Express (javascript)
  • Next.js (javascript)
  • FastAPI (python)
  • MySQL
  • PostgreSQL
  • MongoDB
  • InfluxDB
  • Mongoose (javascript)
  • Langchain (python and javascript)
  • Redux (React/javascript/typescript)
  • jQuery (javascript)
  • Material UI (React/javascript)
  • Supabase
  • Pinecone
  • Typescript (javascript)
  • PDFKit (javascript)
  • Sequelize (javascript)
  • SwiftUI (swift)
  • SwiftData (swift)
  • Next UI (Next)
  • Chakra UI (React/Next.js)
Frameworks, technologies and libraries that I have some experience in:
  • Bootstrap (html & css & javascript)
  • React Native (javascript)
  • AWS
  • Railway
  • Vercel
  • Serverless
  • Selenium (python)
  • Mocha.js (javascript)
  • Electron (javascript)
  • Jest (javascript)
  • Loopback (javascript)
  • d3.js (javascript)
  • Lodash (javascript)
  • Azure
  • Chart.js (javascript)
  • Tailwind (html & css)
  • TailwindUI (react)
  • Moment.js (javascript)
  • Django (python)
  • Laravel (php)
Frameworks, technologies and libraries that are popular and I could learn in a week:
  • Ember.js (javascript)
  • Polymer.js (javascript)
  • Meteor (javascript)
  • Svelte (javascript)
  • Gatsby.js (javascript)
  • Vue.js (javascript)
  • Flask (python)
  • Bottle (python)
  • Code Igniter (php)
  • Symfony (php)
  • LlamaIndex (python)
  • Chai.js (javascript)
  • Anime.js (javascript)
Frameworks, technologies and libraries that are popular but would take at least a month to become proficient in:
  • Angular (typescript)
  • Zend (php)
  • Ionic (javascript)

My Story

I have always had a passion for technology whether it was disassembling mechanical toys as a child, learning HTML/CSS in high school, or my current life-long career in information technology. In 2016, I joined an 18 month professional software training program called BILD-IT financed by the British Embassy. I was so impressed with the quality and dedication of my teachers that, after completing the training, I joined the teaching staff while simultaneously launching my career as a professional freelancer.

BILD-IT is sustained by a group of part-time trainers. In 2018, two other teachers and I started a freelancers' collective called TIKA Technologies. TIKA now has a four-person core group of developers, and we can always reach out to the many professional developers who graduated from BILD-IT when we have a coding challenge or need to add to our team. Together we provide clients with high-quality solutions while dedicating part of our time to teaching and helping others start their IT careers.

Here Is Me Teaching Object-Oriented Programming In 2022

Testimonial

What My Clients Say

Contact

Let's Work Together

If you are an entrepreneur, a founder of a small to medium sized private business, if you have a new idea or a new feature you want to build, I would enjoy discussing your current challenge and the possibility of working with you.

(This personal portfolio page was created specifically for Upwork clients)