COMPARATIVE STUDY OF PRE-TRAINED AND TRADITIONALLY TRAINED MODELS FOR TEXT-BASED SENTIMENT ANALYSIS
Abstract
Opinion mining, a subfield of natural language processing (NLP) that extracts sentiments and viewpoints from textual data, is an essential component of business intelligence, social media analytics, and decision-making processes. This study compares the performance and suitability of pre-trained opinion mining models, such as VADER and TextBlob, with conventionally trained models, including Naïve Bayes and Support Vector Machine (SVM), across a range of scenarios. VADER, a lexicon- and rule-based model, is highly effective for analyzing informal social media text, while TextBlob offers user-friendly sentiment detection but struggles with complex linguistic structures. In contrast, Naïve Bayes, a probabilistic classifier, is efficient for large-scale text classification but has difficulty handling negation and sarcasm. SVM, a powerful supervised learning method, excels at high-dimensional text categorization but requires careful feature engineering and parameter tuning. The study evaluates these models on a variety of datasets, including news articles, product reviews, and social media posts. The findings provide practical guidance for selecting the most suitable opinion mining approach for a given application, contributing to advances in NLP-driven sentiment analysis.
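As a minimal sketch of the "traditionally trained" side of the comparison (not code from the study itself), the snippet below trains the two conventional classifiers the abstract names, Naïve Bayes and SVM, on a tiny hypothetical labeled dataset using scikit-learn; the texts and labels are illustrative placeholders, not the study's actual data.

```python
# Sketch: training Naive Bayes and SVM sentiment classifiers with scikit-learn.
# The training texts/labels below are hypothetical examples, not study data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy labeled corpus (illustrative only)
texts = [
    "great product, loved it",
    "terrible quality, very disappointed",
    "excellent service and fast delivery",
    "awful experience, would not buy again",
    "really happy with this purchase",
    "worst purchase I have ever made",
]
labels = ["pos", "neg", "pos", "neg", "pos", "neg"]

# Each pipeline converts raw text to TF-IDF features, then fits a classifier
nb = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(texts, labels)
svm = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(texts, labels)

for name, model in [("Naive Bayes", nb), ("SVM", svm)]:
    print(name, model.predict(["terrible quality, very disappointed"]))
```

In contrast to the pre-trained VADER and TextBlob models, which can score text out of the box, these classifiers produce nothing useful until they are trained on labeled data, which is the practical trade-off the study examines.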