100 command prompts for data analysis

Here is a list of 100 command prompts for data analysis, organized by the stages of a typical data analysis workflow.

1. Data Definition & Acquisition

  1. Define the primary business question(s) to be answered.

  2. Identify all necessary data sources (e.g., SQL DB, CSVs, APIs).

  3. Query the database to extract [specific data] using [SQL join/filter].

  4. Scrape data from [webpage/API] using [Python library/tool].

  5. Merge [Dataset A] and [Dataset B] on the [common key/index].

  6. Concatenate [Dataset A] and [Dataset B] vertically.

  7. Create a data dictionary (metadata) for all available variables.

  8. Verify the integrity, origin, and freshness of the data.

  9. Load the [CSV/Excel/JSON] file into a pandas DataFrame.

  10. Sample the data to create a smaller, representative subset.
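
To make the acquisition steps concrete, here is a minimal pandas sketch of prompts 5, 6, 9, and 10 above. The file names ('sales.csv', 'customers.csv') and the join key ('customer_id') are hypothetical placeholders, not part of the original list.

```python
import pandas as pd

# Load a CSV file into a DataFrame (prompt 9); file names here are placeholders.
sales = pd.read_csv("sales.csv")
customers = pd.read_csv("customers.csv")

# Merge the two datasets on a common key (prompt 5); the key name is assumed.
merged = sales.merge(customers, on="customer_id", how="left")

# Concatenate datasets with identical columns vertically (prompt 6), e.g.:
# stacked = pd.concat([sales_2023, sales_2024], ignore_index=True)

# Draw a 10% random sample as a smaller, representative subset (prompt 10).
subset = merged.sample(frac=0.10, random_state=42)
print(subset.shape)
```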

2. Data Cleaning & Preprocessing

  1. Identify and count all missing (null/NaN) values per column.

  2. Calculate the percentage of missing data for each feature.

  3. Formulate a strategy for handling missing data (e.g., deletion, imputation).

  4. Impute missing numerical values using the [mean/median].

  5. Impute missing categorical values using the [mode/a constant].

  6. Perform advanced imputation using [k-NN Imputer/MICE].

  7. Identify and count all duplicate rows in the dataset.

  8. Remove duplicate records based on [all columns/a subset of columns].

  9. Verify and correct the data type for each column (e.g., string to datetime).

  10. Standardize categorical text (e.g., 'USA', 'U.S.', 'America' -> 'USA').

  11. Parse and extract components from a [datetime/text] column.

  12. Remove or replace special characters and whitespace from [text column].

  13. Identify outliers using the Z-score method (threshold: 3).

  14. Identify outliers using the Interquartile Range (IQR) method.

  15. Visualize potential outliers using a box plot for [feature].

  16. Apply a transformation (e.g., log, square root) to the [skewed feature].

  17. Normalize [feature] using Min-Max scaling.

  18. Standardize [feature] using Z-score (Standard Scaler).

  19. Convert [categorical feature] into numerical form using one-hot encoding.

  20. Convert [ordinal feature] into numerical form using label encoding.

  21. Bin (discretize) the [continuous feature] into [N] categories.

  22. Engineer a new feature by [combining/dividing two existing features].
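
A minimal cleaning sketch covering the missing-value, duplicate, type-conversion, outlier, scaling, and encoding prompts above. The 'data.csv' file and the 'price', 'region', and 'order_date' columns are assumptions for illustration.

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df = pd.read_csv("data.csv")  # hypothetical file

# Count and percentage of missing values per column (prompts 1-2).
print(df.isna().sum())
print(df.isna().mean() * 100)

# Median imputation for a numerical column, mode for a categorical one (prompts 4-5).
df["price"] = df["price"].fillna(df["price"].median())
df["region"] = df["region"].fillna(df["region"].mode()[0])

# Count, then drop, duplicate rows (prompts 7-8).
print(df.duplicated().sum())
df = df.drop_duplicates()

# Correct a column's type, e.g., string to datetime (prompt 9).
df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")

# Flag outliers with the IQR rule (prompt 14).
q1, q3 = df["price"].quantile([0.25, 0.75])
iqr = q3 - q1
outliers = df[(df["price"] < q1 - 1.5 * iqr) | (df["price"] > q3 + 1.5 * iqr)]

# Min-Max scale a feature (prompt 17) and one-hot encode a categorical one (prompt 19).
df["price_scaled"] = MinMaxScaler().fit_transform(df[["price"]]).ravel()
df = pd.get_dummies(df, columns=["region"])
```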

3. Exploratory Data Analysis (EDA)

  1. Calculate descriptive statistics (mean, median, mode, std dev, quartiles) for all numerical features.

  2. Generate a frequency distribution and count plot for [categorical feature].

  3. Plot a histogram to understand the distribution of [continuous feature].

  4. Plot a density plot (KDE) for [continuous feature].

  5. Assess the skewness and kurtosis of [feature distribution].

  6. Create a bar chart to compare [numerical feature] across [categorical feature].

  7. Create a scatter plot for [variable 1] vs. [variable 2] to check for correlation.

  8. Add a regression line to the scatter plot.

  9. Calculate the Pearson correlation coefficient between [variable 1] and [variable 2].

  10. Generate a full correlation matrix for all numerical variables.

  11. Visualize the correlation matrix using a heatmap.

  12. Generate a cross-tabulation (contingency table) for [categorical var 1] and [categorical var 2].

  13. Plot a stacked bar chart to show the relationship between [cat var 1] and [cat var 2].

  14. Plot side-by-side box plots to compare [continuous var] across [categorical var].

  15. Use violin plots to compare the distribution shape of [continuous var] across [categorical var].

  16. Create a scatter plot matrix (pairs plot) for key numerical features.

  17. Plot a bubble chart using [var 1 (x-axis)], [var 2 (y-axis)], and [var 3 (size)].

  18. Plot [time-series variable] over time to identify trends.

  19. Decompose the time series into trend, seasonality, and residual components.

  20. Perform a cohort analysis to track [user retention/customer churn].

  21. Conduct an RFM (Recency, Frequency, Monetary) analysis for customer segmentation.

  22. Analyze the user journey funnel to identify key drop-off points.

  23. Map geospatial data using a choropleth or scatter map.
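
A short EDA sketch using pandas, matplotlib, and seaborn, touching prompts 1, 3, 10-12, and 14 above. The 'price', 'region', and 'segment' column names are hypothetical.

```python
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

df = pd.read_csv("data.csv")  # hypothetical file

# Descriptive statistics for all numerical features (prompt 1).
print(df.describe())

# Histogram of a continuous feature (prompt 3).
df["price"].plot(kind="hist", bins=30, title="Price distribution")
plt.show()

# Correlation matrix for numerical variables, shown as a heatmap (prompts 10-11).
corr = df.select_dtypes("number").corr()
sns.heatmap(corr, annot=True, cmap="coolwarm")
plt.show()

# Cross-tabulation of two categorical variables (prompt 12).
print(pd.crosstab(df["region"], df["segment"]))

# Side-by-side box plots of a continuous variable across categories (prompt 14).
sns.boxplot(data=df, x="region", y="price")
plt.show()
```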

4. Statistical Analysis & Hypothesis Testing

  1. Formulate a clear null hypothesis (H0) and alternative hypothesis (H1).

  2. Set the significance level (alpha) for the hypothesis test (e.g., 0.05).

  3. Check the assumptions for the chosen statistical test (e.g., normality, homogeneity of variance).

  4. Perform a Shapiro-Wilk test to check for normality.

  5. Perform Levene's test to check for homogeneity of variances.

  6. Perform a one-sample t-test to compare [sample mean] against a [known population mean].

  7. Perform an independent two-sample t-test to compare the means of [Group A] and [Group B].

  8. Perform a paired t-test to compare [before] and [after] measurements.

  9. Perform an Analysis of Variance (ANOVA) to compare means across [3+ groups].

  10. If ANOVA is significant, perform a [Tukey's HSD] post-hoc test.

  11. Perform a Chi-squared test for independence between [categorical var 1] and [categorical var 2].

  12. Perform a non-parametric equivalent test (e.g., Mann-Whitney U, Kruskal-Wallis) if assumptions are violated.

  13. Calculate the p-value and determine statistical significance.

  14. Calculate the [95%/99%] confidence interval for the [mean/proportion].

  15. Perform a simple linear regression analysis with [Y] as the dependent variable.

  16. Interpret the R-squared, coefficients, and p-values of the regression model.

  17. Analyze the A/B test results to determine the winning variant.

  18. Calculate the statistical power (and Type II error) of the test.
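
A sketch of the core testing workflow with scipy.stats. The two groups are synthetic stand-ins so the script runs end to end; substitute real samples in practice.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(100, 15, 200)  # synthetic stand-in data
group_b = rng.normal(105, 15, 200)

alpha = 0.05  # significance level (prompt 2)

# Check normality and homogeneity of variances (prompts 4-5).
print("Shapiro (A) p:", stats.shapiro(group_a).pvalue)
print("Levene p:", stats.levene(group_a, group_b).pvalue)

# Independent two-sample t-test (prompt 7), then the significance decision (prompt 13).
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
print("Reject H0" if p_value < alpha else "Fail to reject H0")

# Non-parametric fallback if the assumptions are violated (prompt 12).
u_stat, p_mw = stats.mannwhitneyu(group_a, group_b)

# 95% confidence interval for the mean of group A (prompt 14).
ci = stats.t.interval(0.95, df=len(group_a) - 1,
                      loc=group_a.mean(), scale=stats.sem(group_a))
print("95% CI:", ci)
```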

5. Modeling & Machine Learning

  1. Split the data into training, validation, and test sets (e.g., 70/15/15 split).

  2. Establish a baseline model for performance comparison.

  3. Train a [Linear/Logistic] Regression model.

  4. Train a [Decision Tree] classifier and visualize the tree.

  5. Train an ensemble model (e.g., Random Forest, Gradient Boosting/XGBoost).

  6. Train a [k-Nearest Neighbors (k-NN)] model and find the optimal 'k'.

  7. Train a [Support Vector Machine (SVM)] model.

  8. For classification, generate a confusion matrix.

  9. For classification, plot the ROC curve and calculate the AUC score.

  10. For classification, calculate precision, recall, and F1-score.

  11. For regression, calculate Mean Absolute Error (MAE), Mean Squared Error (MSE), and Root Mean Squared Error (RMSE).

  12. Perform hyperparameter tuning using [Grid Search/Random Search CV].

  13. Perform K-means clustering to identify [N] distinct groups.

  14. Use the elbow method to determine the optimal number of clusters (k).

  15. Perform Principal Component Analysis (PCA) for dimensionality reduction.

  16. Analyze the feature importance scores from the [Random Forest/XGBoost] model.
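
An end-to-end classification sketch with scikit-learn covering the split, ensemble training, hyperparameter tuning, evaluation, and feature-importance prompts above. It uses synthetic data from make_classification so it runs as-is.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, confusion_matrix, roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in data; replace with your own feature matrix and labels.
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)

# Hold out a test set (prompt 1); a validation set can be carved from the training split.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=42, stratify=y)

# Train an ensemble model (prompt 5) with a small hyperparameter search (prompt 12).
search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
    cv=5)
search.fit(X_train, y_train)
model = search.best_estimator_

# Evaluate: confusion matrix, precision/recall/F1, and AUC (prompts 8-10).
y_pred = model.predict(X_test)
y_prob = model.predict_proba(X_test)[:, 1]
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
print("AUC:", roc_auc_score(y_test, y_prob))

# Inspect feature importance scores (prompt 16).
print(model.feature_importances_)
```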

6. Interpretation & Reporting

  1. Summarize the top 3-5 key findings from the exploratory analysis.

  2. Interpret the results of the hypothesis test in plain business language.

  3. Translate the model's coefficients into actionable insights.

  4. Analyze the model's prediction errors: where and why does it fail?

  5. Identify any potential sources of bias in the data or model.

  6. List the limitations of the analysis (e.g., data quality, sample size).

  7. Formulate 3-5 data-driven recommendations based on the findings.

  8. Create a narrative (story) that explains the data from problem to solution.

  9. Prepare a slide deck for [technical/non-technical] stakeholders.

  10. Build an interactive dashboard (e.g., in Tableau, Power BI, Streamlit) to present the key findings.

  11. Outline the next steps and proposals for future analysis.
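
Most of these reporting prompts are narrative, but the dashboard step (prompt 10) can be sketched in a few lines of Streamlit. The 'results.csv' file and its 'region', 'month', and 'revenue' columns are hypothetical placeholders.

```python
# app.py - run with: streamlit run app.py
import pandas as pd
import streamlit as st

st.title("Key Findings Dashboard")

df = pd.read_csv("results.csv")  # hypothetical aggregated results

# Let stakeholders filter the findings by a categorical dimension.
region = st.selectbox("Region", sorted(df["region"].unique()))
filtered = df[df["region"] == region]

# A headline metric plus a supporting chart.
st.metric("Average revenue", f"${filtered['revenue'].mean():,.0f}")
st.bar_chart(filtered, x="month", y="revenue")
```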

