Command prompts for "data analysis"

 

Generating 100 specific, unique, and universally applicable command prompts for "data analysis" is a tall order, because the exact commands depend heavily on the software, programming language, and environment being used (e.g., Python/pandas, R, SQL, MATLAB, a statistical package like SPSS or SAS, or a command-line tool like awk or sed).

However, I can provide 100 distinct conceptual tasks or operations common in data analysis, phrased as prompts you would translate into a specific command in your chosen environment.

Here are 100 conceptual command prompts for data analysis, broken down by common analytical phases. After each group, a short Python/pandas sketch shows how a few of the prompts might translate into actual commands.


Data Acquisition & Inspection (1-20)

These focus on loading data and getting an initial look at its structure.

  1. Load the primary dataset named sales_data.csv into a dataframe.

  2. Display the first 10 rows of the dataframe.

  3. Show the last 5 rows of the dataframe.

  4. Print the total number of rows and columns (the shape/dimensions).

  5. Summarize the data types (schema) of all columns.

  6. List all the column names in the dataset.

  7. Calculate the memory usage of the dataframe.

  8. Count the total number of non-missing values per column.

  9. Display a quick statistical summary (mean, std, min, max, quartiles) for all numerical columns.

  10. Check for the presence of duplicate rows.

  11. Display the unique values and their counts for the region column.

  12. Read a dataset from a SQL query using a connection string.

  13. Import data from an Excel file, specifically from the sheet named "Q3_Results".

  14. Inspect the date range of the transaction_date column.

  15. Convert the entire dataframe to a dictionary of records.

  16. Show the metadata or header information of the underlying data file.

  17. Preview the raw text content of the first few lines of a log file.

  18. Establish a live connection to a cloud data source (e.g., BigQuery or an S3 bucket).

  19. List all files in the current working directory that end with .json.

  20. Assign a new index to the dataframe starting from 1.
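
To make this concrete, here is a minimal Python/pandas sketch of prompts 1-11. The file name sales_data.csv and the region column come from the prompts themselves; everything else is plain pandas.

```python
import pandas as pd

# Prompt 1: load the primary dataset into a dataframe.
df = pd.read_csv("sales_data.csv")

# Prompts 2-3: first 10 and last 5 rows.
print(df.head(10))
print(df.tail(5))

# Prompts 4-7: shape, schema, column names, memory usage.
print(df.shape)
print(df.dtypes)
print(df.columns.tolist())
print(f"{df.memory_usage(deep=True).sum()} bytes")

# Prompts 8-10: non-missing counts, summary statistics, duplicate check.
print(df.count())
print(df.describe())
print(f"{df.duplicated().sum()} duplicate rows")

# Prompt 11: unique values and counts for the region column.
print(df["region"].value_counts())
```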


Data Cleaning & Preparation (21-40)

These focus on handling missing values, duplicates, and converting data types.

  21. Drop all rows that contain any missing values (NaN).

  22. Fill missing values in the customer_age column with the mean age.

  23. Replace all instances of the string 'N/A' with actual missing values (NaN).

  24. Drop the customer_id column as it's not needed for analysis.

  25. Remove any fully duplicated rows.

  26. Convert the price column from a string to a float data type.

  27. Standardize the case of the product_name column to lowercase.

  28. Extract the year from the order_date column into a new column.

  29. Split the full_name column into two new columns: first_name and last_name.

  30. Remove leading/trailing whitespace from the category column.

  31. Filter the data to keep only transactions where status is 'Completed'.

  32. Identify and count any outliers in the revenue column using the IQR method.

  33. Apply a log transformation to the sales_volume column.

  34. Recode the values in the gender column ('M', 'F') to (0, 1).

  35. One-hot encode the marital_status categorical column.

  36. Bin the continuous age column into 5 equal-width bins.

  37. Standardize (Z-score normalize) the income column.

  38. Impute missing values in the city column using the most frequent (mode) city.

  39. Validate that the sum of cost and profit equals the price for every row.

  40. Rename the column txn_id to transaction_identifier.
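
A minimal pandas sketch of the missing-value, duplicate, and type-conversion prompts (21-30 and 38). It assumes the columns behave as the prompts describe, e.g. that customer_age is numeric and that price parses cleanly as a float.

```python
import numpy as np
import pandas as pd

df = pd.read_csv("sales_data.csv")

# Prompts 23 and 21: turn 'N/A' strings into real missing values, then drop incomplete rows.
df = df.replace("N/A", np.nan)
complete_rows = df.dropna()

# Prompts 22 and 38: fill missing ages with the mean and missing cities with the mode.
df["customer_age"] = df["customer_age"].fillna(df["customer_age"].mean())
df["city"] = df["city"].fillna(df["city"].mode()[0])

# Prompts 24-25: drop an unneeded column and remove fully duplicated rows.
df = df.drop(columns=["customer_id"]).drop_duplicates()

# Prompts 26-28, 30: type conversion, text normalization, and a derived year column.
df["price"] = df["price"].astype(float)
df["product_name"] = df["product_name"].str.lower()
df["category"] = df["category"].str.strip()
df["order_year"] = pd.to_datetime(df["order_date"]).dt.year
```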
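
And a sketch of the transformation prompts (31-32 and 35-37), under the same assumptions about column names:

```python
import pandas as pd

df = pd.read_csv("sales_data.csv")

# Prompt 31: keep only completed transactions.
df = df[df["status"] == "Completed"]

# Prompt 35: one-hot encode a categorical column.
df = pd.get_dummies(df, columns=["marital_status"])

# Prompt 36: bin age into 5 equal-width bins.
df["age_bin"] = pd.cut(df["age"], bins=5)

# Prompt 37: Z-score standardize income.
df["income_z"] = (df["income"] - df["income"].mean()) / df["income"].std()

# Prompt 32: identify and count outliers in revenue using the IQR rule.
q1, q3 = df["revenue"].quantile([0.25, 0.75])
iqr = q3 - q1
outliers = df[(df["revenue"] < q1 - 1.5 * iqr) | (df["revenue"] > q3 + 1.5 * iqr)]
print(f"{len(outliers)} outliers by the IQR rule")
```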


Data Exploration & Manipulation (41-70)

These focus on slicing, aggregating, pivoting, and exploring relationships.

  41. Filter the data for transactions in 'California' OR 'Texas'.

  42. Select only the date, product, and quantity columns.

  43. Sort the entire dataset by transaction_amount in descending order.

  44. Calculate the total sales across the entire dataset.

  45. Find the average rating for the product 'X-2000'.

  46. Group the data by category and calculate the mean price for each category.

  47. Find the maximum profit achieved by any single transaction.

  48. Count the number of unique customers.

  49. Create a cross-tabulation (contingency table) of region and product_type.

  50. Merge the current dataframe with a customer_details dataframe using the customer_id as the key (left join).

  51. Append a new dataset (Q4_data) to the bottom of the current dataframe.

  52. Pivot the data to show product_type in the index, year in the columns, and the sum of sales as the values.

  53. Calculate the percent difference in sales between the current year and the previous year.

  54. Identify the top 5 products by total revenue.

  55. Calculate a rolling 7-day average of the daily_visitors column.

  56. Apply a custom function to clean text data in the notes column.

  57. Sample 10% of the data randomly.

  58. Shift the stock_price column by one row to enable comparison with the next day's price.

  59. Generate a cumulative sum of the daily_views column.

  60. Group by country and return the name of the city with the highest sales within each country.

  61. Calculate the interquartile range (IQR) for the delivery_time column.

  62. Compute the correlation matrix for all numerical variables.

  63. Create a new column that categorizes transactions as 'High Value' when the amount is at least $500.

  64. Select rows where product_type is 'Electronics' AND quantity is greater than 10.

  65. Calculate the mode of the payment_method column.

  66. Rank the products based on their total profit.

  67. Calculate the variance of the inventory_level column.

  68. Compute the coefficient of variation for the monthly_expense column.

  69. Filter the data to exclude the category 'Returns'.

  70. Perform a full outer join between the main dataset and a supplier_info table.
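
Here is how a few of the slicing, aggregation, and joining prompts (41-44, 46, 48-50) might look in pandas. The state column and the customer_details.csv file name are assumptions made only so the example is self-contained.

```python
import pandas as pd

df = pd.read_csv("sales_data.csv")
customer_details = pd.read_csv("customer_details.csv")

# Prompts 41-43: filter by state, select columns, sort by amount.
ca_or_tx = df[df["state"].isin(["California", "Texas"])]
subset = df[["date", "product", "quantity"]]
by_amount = df.sort_values("transaction_amount", ascending=False)

# Prompts 44, 46, 48: overall total, per-category mean, unique customer count.
total_sales = df["sales"].sum()
mean_price_by_category = df.groupby("category")["price"].mean()
unique_customers = df["customer_id"].nunique()

# Prompts 49-50: cross-tabulation, then a left join on customer_id.
region_by_type = pd.crosstab(df["region"], df["product_type"])
merged = df.merge(customer_details, on="customer_id", how="left")
```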
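
And a sketch of the reshaping and time-oriented prompts (52, 54-55, 59, 62, 66):

```python
import pandas as pd

df = pd.read_csv("sales_data.csv")

# Prompt 52: product_type rows, year columns, summed sales as values.
pivot = df.pivot_table(index="product_type", columns="year", values="sales", aggfunc="sum")

# Prompts 55 and 59: rolling 7-day average and cumulative sum.
df["visitors_7d_avg"] = df["daily_visitors"].rolling(window=7).mean()
df["views_cumulative"] = df["daily_views"].cumsum()

# Prompts 54 and 66: top 5 products by revenue, and a profit-based ranking.
top5_by_revenue = df.groupby("product")["revenue"].sum().nlargest(5)
profit_rank = df.groupby("product")["profit"].sum().rank(ascending=False)

# Prompt 62: correlation matrix for the numerical columns.
corr = df.select_dtypes(include="number").corr()
```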


Statistical Modeling & Visualization (71-100)

These focus on advanced analysis, model preparation, and output.

  71. Run a simple linear regression with sales as the dependent variable and advertising_spend as the independent variable.

  72. Fit a k-means clustering model with k clusters to the feature data.

  73. Calculate the p-value from a chi-squared test between gender and purchase_decision.

  74. Perform an ANOVA test to compare the mean scores across three different treatment groups.

  75. Split the dataset into training (80%) and testing (20%) sets.

  76. Calculate the precision, recall, and F1-score for a classification model.

  77. Visualize the distribution of the income column using a histogram.

  78. Generate a scatter plot of price vs. rating.

  79. Create a box plot to show the distribution of salary across different departments.

  80. Plot a time series chart of daily_visitors.

  81. Display the coefficients and intercept of the fitted regression model.

  82. Compute the eigenvalues and eigenvectors of a covariance matrix for Principal Component Analysis (PCA).

  83. Plot a bar chart showing the total quantity sold per country.

  84. Save the resulting cleaned dataframe to a new CSV file named cleaned_data.csv.

  85. Export the summary statistics of the model to a text file.

  86. Generate a heatmap of the correlation matrix.

  87. Calculate the root mean squared error (RMSE) of the model predictions.

  88. Perform a Grubbs' test to statistically identify outliers.

  89. Save the resulting visualization (e.g., the bar chart) as a PNG image file.

  90. Calculate the log-likelihood of a fitted model.

  91. Perform feature selection using Recursive Feature Elimination (RFE).

  92. Apply a MinMaxScaler to the numerical features.

  93. Generate a k-fold cross-validation split.

  94. Display the ROC curve for a binary classification model.

  95. Serialize (pickle) the trained machine learning model for later use.

  96. Compute the variance inflation factor (VIF) for model variables to check for multicollinearity.

  97. Execute a t-test to compare the mean response_time of two different server groups.

  98. Aggregate the data by week and find the sum of sales for each week.

  99. Save the output of a lengthy group-by calculation to an intermediate file (e.g., HDF5).

  100. Print the memory address of the current dataframe object (for debugging/performance).
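
To illustrate the modeling prompts, here is a minimal scikit-learn sketch of prompts 71, 75, 81, and 87, assuming sales and advertising_spend are numeric columns in sales_data.csv:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("sales_data.csv").dropna(subset=["sales", "advertising_spend"])

# Prompt 75: 80/20 train-test split.
X = df[["advertising_spend"]]
y = df["sales"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Prompts 71 and 81: fit the regression, then inspect its coefficients and intercept.
model = LinearRegression().fit(X_train, y_train)
print("coefficient:", model.coef_, "intercept:", model.intercept_)

# Prompt 87: RMSE of the predictions on the held-out set.
rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
print("RMSE:", rmse)
```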
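
A matplotlib sketch of the visualization prompts (77-78, 83, 86, and 89); the output file names are arbitrary:

```python
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("sales_data.csv")

# Prompts 77 and 89: histogram of income, saved as a PNG.
df["income"].plot(kind="hist", bins=30, title="Income distribution")
plt.savefig("income_hist.png")
plt.close()

# Prompt 78: scatter plot of price vs. rating.
df.plot(kind="scatter", x="price", y="rating", title="Price vs. rating")
plt.savefig("price_vs_rating.png")
plt.close()

# Prompt 83: bar chart of total quantity sold per country.
df.groupby("country")["quantity"].sum().plot(kind="bar", title="Quantity sold per country")
plt.savefig("quantity_by_country.png")
plt.close()

# Prompt 86: heatmap of the correlation matrix, using plain matplotlib.
corr = df.select_dtypes(include="number").corr()
plt.imshow(corr.values, cmap="coolwarm")
plt.colorbar()
plt.xticks(range(len(corr.columns)), corr.columns, rotation=90)
plt.yticks(range(len(corr.columns)), corr.columns)
plt.savefig("correlation_heatmap.png", bbox_inches="tight")
```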
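
Finally, a sketch of the output and persistence prompts (84, 95, and 98-99). A plain CSV stands in for the HDF5 intermediate file mentioned in prompt 99, simply to avoid an extra dependency:

```python
import pickle

import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("sales_data.csv")

# Prompt 84: save the cleaned dataframe to a new CSV file.
df.to_csv("cleaned_data.csv", index=False)

# Prompts 98-99: aggregate sales by week, then persist the intermediate result.
df["transaction_date"] = pd.to_datetime(df["transaction_date"])
weekly_sales = df.resample("W", on="transaction_date")["sales"].sum()
weekly_sales.to_csv("weekly_sales.csv")  # CSV used here in place of HDF5

# Prompt 95: serialize (pickle) a fitted model for later reuse.
train = df.dropna(subset=["advertising_spend", "sales"])
model = LinearRegression().fit(train[["advertising_spend"]], train["sales"])
with open("sales_model.pkl", "wb") as f:
    pickle.dump(model, f)
```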
