Our customer is a US-based EdTech company offering a full range of services to help partners develop and deliver online courses. It provides instructional design, course support, and technology solutions to Online Program Managers (OPMs), higher education institutions, and online course providers. With a strong focus on optimizing student learning, its innovative technology solutions have impacted over 3 million students across the United States.
The customer works closely with graders and trainers, offering a platform that supports content creation and curation, online teaching, online course management, one-on-one student support, and technology solutions for optimizing student learning. They approached us to address the challenges educators face in the grading process: they wanted a solution that would free educators from mechanical, time-consuming tasks and establish a standardised evaluation system for better time management and efficiency.
The customer’s platform is used by graders and trainers for teaching, course management, student learning, and progress evaluation. Trainers and graders regularly run into issues around grading assignments: grading essays is a laborious and time-consuming task, and it is difficult for a grader to review every assignment with the same level of attention as the first few. It is also tedious to keep track of the different sets of rubrics used to assess different types of assignments.
Rather than spending a significant amount of time reviewing each essay and writing detailed feedback, the customer wanted to automate these mechanical tasks, freeing educators to focus on higher-level work such as teaching and student learning. The goal was to build an Essay Grader MVP by leveraging generative AI tools like ChatGPT: a solution that could evaluate assignments faster by suggesting a grade according to the rubrics provided.
The Velotio team is very hands-on and expert in Generative AI products. We wanted to set up a standardised evaluation system but didn’t know how to go about it. The Velotio team owned the whole process, from creating a roadmap to flawlessly developing our Grader MVP in a short span. We were quite impressed with the outcome and how it solved many issues our educators faced.
The aim was to empower graders and trainers to improve efficiency and productivity through AI-powered tools. Assistive technology of this kind reduces human error and delivers consistent, repeatable results with remarkable performance.
They partnered with us for our vast expertise in building products and expert systems driven by AI and ML: deep learning models (CNNs, RNNs, LSTMs, and SOMs); Natural Language Processing (NLP) with open-source and proprietary LLMs (GPT-4, ChatGPT) built on TensorFlow and PyTorch; computer vision with OpenCV, CUDA, and Keras; generative image models such as DALL-E 2, Midjourney, and Stable Diffusion; and speech recognition.
We wanted to integrate AI tools with advanced natural language processing capabilities that could analyze and understand complex topics. The goal was to build a platform that could evaluate essays on parameters including structure, coherence, and rubric alignment, meeting the requirements of different academic levels and disciplines.
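As a rough illustration of this kind of rubric-based evaluation, the sketch below assembles an essay and a set of rubric criteria into a single grading prompt for an LLM. The function name, rubric entries, and prompt wording are hypothetical examples for clarity, not the customer's actual implementation.

```python
# Illustrative sketch: assembling a rubric-based grading prompt for an LLM.
# The rubric criteria and prompt wording here are hypothetical examples,
# not the customer's actual configuration.

def build_grading_prompt(essay: str, rubric: dict) -> str:
    """Combine an essay and a rubric into a single grading prompt."""
    criteria = "\n".join(f"- {name}: {desc}" for name, desc in rubric.items())
    return (
        "You are an essay grader. Evaluate the essay below against each "
        "rubric criterion, then suggest an overall grade.\n\n"
        f"Rubric:\n{criteria}\n\n"
        f"Essay:\n{essay}"
    )

rubric = {
    "Structure": "Clear introduction, body, and conclusion",
    "Coherence": "Logical flow between paragraphs",
    "Evidence": "Claims supported by relevant sources",
}

prompt = build_grading_prompt("The industrial revolution began...", rubric)
# The assembled prompt would then be sent to a chat-completion model
# (e.g. GPT-4) and the suggested grade parsed from the response.
```

Keeping the rubric as structured data rather than free text makes it easy to swap rubric sets per assignment type, which addresses the graders' difficulty of tracking different rubrics manually.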
Keeping the tight timeline in mind, we quickly set up a team comprising a backend engineer, a frontend engineer, a data scientist, and team leads. They kickstarted the project by creating a comprehensive roadmap and evaluating the potential challenges that needed to be tackled.
They determined the right tech stack for both the Grader Dashboard and the Trainer Dashboard.
We identified the best AI model and supporting tools, carefully weighing the requirements against the budget. Training an AI model can be cost-intensive, and choosing the right training data is crucial for any successful application. The team determined the size of the training set and curated accurate data covering all the test cases that would be used to train the AI-driven grader model.
We set up a dedicated testing environment for the AI model, fully isolated from the production system. This allowed the model to be trained and tested regularly, without disruption, and made more efficient over time.
Once the grading tool was set up, regular monitoring was needed to evaluate the system. We architected a robust trainer feedback system so that trainers could review the discrepancies reported by graders. It also checked whether the feedback was precise and accurate, so that it could be used to improve the model.
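The core of such a feedback loop can be sketched as follows: graders' final grades are compared against the AI-suggested grades, and essays where the two diverge beyond a threshold are queued for trainer review. The field names, threshold, and grade scale below are illustrative assumptions, not the customer's actual schema.

```python
# Illustrative sketch of a trainer feedback loop: essays where the
# AI-suggested grade and the human grader's grade diverge beyond a
# threshold are flagged for trainer review. Field names, the 0-100
# grade scale, and the threshold value are hypothetical.

from dataclasses import dataclass

@dataclass
class GradedEssay:
    essay_id: str
    ai_grade: float      # grade suggested by the model (0-100)
    grader_grade: float  # final grade assigned by the human grader

def flag_discrepancies(results, threshold=10.0):
    """Return essays whose AI and human grades diverge beyond the
    threshold; these go to trainers for review and, if the grader's
    feedback holds up, into the next round of model improvement."""
    return [r for r in results if abs(r.ai_grade - r.grader_grade) > threshold]

batch = [
    GradedEssay("e1", ai_grade=85, grader_grade=88),  # within threshold
    GradedEssay("e2", ai_grade=60, grader_grade=80),  # flagged for review
]
review_queue = flag_discrepancies(batch)  # contains only "e2"
```

Routing only the flagged essays to trainers keeps the review workload small while still surfacing the cases most useful for improving the model.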
We then defined and implemented KPIs for internal analytics and business reporting. To ensure that all stakeholders were equipped to use the new tool, we developed training materials and documentation for graders and trainers. We conducted regular demo and review sessions with stakeholders and product owners, and thorough end-to-end testing was done before go-live.
The Essay Grader MVP reduced essay evaluation time from 45 minutes to 3-5 minutes.
The graders reported a drastic increase in productivity and reduced workload.
Recorded 93% accuracy in evaluating essays based on rubrics.