The error message “Exceeded threshold for number of bad records” in AWS Machine Learning indicates that during data ingestion or processing, a significant number of records didn’t meet the expected format or quality. AWS ML enforces a threshold for the number of such “bad records” that can be tolerated before the process is halted to prevent inaccurate or unreliable model training.
Here’s a general explanation of how to handle this error, along with an example of what might lead to it:
Example Scenario: Suppose you are using AWS Machine Learning to build a predictive model to classify customer reviews as either positive or negative based on the text content. You provide a dataset containing review text and corresponding labels.
Issue: However, a portion of the review data contains corrupted or invalid text that cannot be processed correctly. This could be due to text encoding issues, special characters, or other inconsistencies.
Resolution: The following steps help keep the number of bad records below the threshold:
- Data Preprocessing: Before sending the data to AWS ML, perform thorough data preprocessing to clean and sanitize the text data. This includes removing special characters, ensuring consistent text encoding, and handling missing or erroneous data points.
- Data Inspection: Inspect your dataset to identify and correct any problematic records. You can visualize or analyze the records causing issues to better understand the nature of the problem.
- Quality Check: Implement a quality check or validation step before sending data to AWS ML. This can involve using regular expressions or custom logic to identify and exclude records that don’t meet the expected format.
- Data Sampling: If the dataset is large, consider randomly sampling a subset of the data and running your quality checks on that subset to surface issues quickly.
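The inspection and quality-check steps above can be sketched in pandas. This is a minimal example under assumed conditions: the column names `review_text` and `label`, the label vocabulary, and the sample rows are all illustrative, and the validation rules (non-empty text, no control characters, known label) are just one possible policy:

```python
import re
import pandas as pd

# Sample rows standing in for a reviews dataset (column names are illustrative)
data = pd.DataFrame({
    'review_text': ['Great product!', '', 'Bad\x00bytes here', 'Works as expected'],
    'label': ['positive', 'positive', 'negative', 'neutral'],
})

def is_valid_record(row):
    """Return True if the record meets the expected format."""
    text, label = row['review_text'], row['label']
    if not isinstance(text, str) or not text.strip():
        return False  # missing or empty text
    if re.search(r'[\x00-\x08\x0b\x0c\x0e-\x1f]', text):
        return False  # control characters left over from corrupted encoding
    if label not in ('positive', 'negative'):
        return False  # unexpected label value
    return True

mask = data.apply(is_valid_record, axis=1)
bad_records = data[~mask]    # inspect these rows to understand the problem
clean_records = data[mask]   # only these are sent on to AWS ML
```

Keeping the rejected rows in `bad_records` rather than silently dropping them makes it easy to inspect what kind of corruption is actually present before deciding on a fix.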
Here’s a simplified code snippet to illustrate the preprocessing step:
```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Load the dataset
data = pd.read_csv('reviews.csv')

# Data preprocessing: clean and sanitize text data
def preprocess_text(text):
    # Drop non-string values, strip unprintable characters, normalize whitespace
    if not isinstance(text, str):
        return ''
    cleaned_text = ''.join(ch for ch in text if ch.isprintable())
    return ' '.join(cleaned_text.split())

data['cleaned_text'] = data['review_text'].apply(preprocess_text)

# Drop records whose cleaned text ended up empty
data = data[data['cleaned_text'] != '']

# Split dataset into training and testing sets
train_data, test_data = train_test_split(data, test_size=0.2, random_state=42)

# Save the cleaned dataset
train_data.to_csv('cleaned_train_data.csv', index=False)
test_data.to_csv('cleaned_test_data.csv', index=False)
```
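For large datasets, the sampling step can be sketched as follows. This is an assumption-laden illustration: the synthetic rows stand in for a large reviews file, the column names are hypothetical, and the 10% sampling fraction and the "missing or empty text" criterion are arbitrary choices:

```python
import pandas as pd

# Synthetic data standing in for a large reviews dataset (columns illustrative)
data = pd.DataFrame({
    'review_text': ['Great product!', None, 'Terrible', 'Okay item', ''] * 200,
    'label': ['positive', 'negative', 'negative', 'positive', 'negative'] * 200,
})

# Randomly sample 10% of the records for a quick quality check
sample = data.sample(frac=0.1, random_state=42)

# Estimate the bad-record rate (missing or empty text) on the sample
bad = sample['review_text'].isna() | (sample['review_text'].fillna('').str.strip() == '')
bad_rate = bad.mean()
print(f'Estimated bad-record rate: {bad_rate:.1%}')
```

If the estimated rate on the sample is already near the threshold, it is usually cheaper to fix the upstream export than to scrub the full dataset row by row.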
By performing proper data preprocessing and quality checks, you can reduce the likelihood of encountering the “Exceeded threshold for number of bad records” error in AWS Machine Learning.