Abstract:
Automatic essay scoring (AES) has gained significant popularity in recent years because it offers an efficient and objective means of evaluating student writing. Such assessment is valuable to educators, as it provides timely feedback that can help students improve their writing abilities and achieve academic success. This study presents an extensive examination of several AES frameworks, investigating their performance metrics, underlying algorithms, and suitability for a range of educational contexts. The paper examines three main categories of AES frameworks, namely content-based, machine learning (ML)-based, and hybrid approaches, highlighting the benefits and drawbacks of each. By synthesizing recent research and developments, this review aims to inform educators, policymakers, and technologists about the strengths and weaknesses of AES frameworks, and thereby to support the improvement of automated grading technologies and their integration into educational practice.