Computer system issues can arise in any organization, including a retail setting such as "Fresh Retail".
Data Loss and Backup Management
Data is a valuable asset for businesses, and protecting it from loss or damage is critical for the continuity of operations (Akhtar et al., 2012). The Fresh Retail store, like any other organization, must establish a robust backup strategy to safeguard its data from potential loss or corruption. Develop automated backup scripts that regularly back up critical data to secure servers or cloud storage. These scripts can be scheduled to run at specific intervals, ensuring that essential data is always backed up and can be restored quickly in case of data loss.
#!/bin/bash
# Set the date and time for the backup file
BACKUP_DATE=$(date +"%Y%m%d")
BACKUP_TIME=$(date +"%H%M%S")
BACKUP_DIR="."
# Create a new directory for today's backup
mkdir -p "$BACKUP_DIR/day_$BACKUP_DATE"
# Use pg_dump to create a backup of the database (host and port are passed separately)
pg_dump -U thienhang -h localhost -p 5432 thienhang > "$BACKUP_DIR/day_$BACKUP_DATE/db_backup_$BACKUP_TIME.sql"
Fig 1. This script creates a backup of the "thienhang" PostgreSQL database
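As noted above, these backups can be scheduled to run at specific intervals. A minimal illustration, assuming the script is saved at a hypothetical path such as /usr/local/bin/pg_backup.sh and cron is available on the server, is a crontab entry that runs the backup every night at 02:00:
# Added with: crontab -e (paths below are assumed examples, not part of the original setup)
0 2 * * * /usr/local/bin/pg_backup.sh >> /var/log/pg_backup.log 2>&1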
Inventory Management and Stockouts
Inadequate inventory management can result in overstocking or stockouts, affecting sales, customer satisfaction, and cash flow. Create scripts that automate the monitoring of inventory levels, generate alerts for low-stock items, and even automate orders for replenishment based on predefined thresholds. This helps maintain optimal inventory levels, reducing the risk of stockouts and overstocking.
#!/bin/bash
# PostgreSQL settings
DB_HOST="10.1.1.0"
DB_PORT="5432"
DB_NAME="thienhang"
DB_USER="thienhang"
DB_PASSWORD="thienhang"
# Let psql read the password non-interactively
export PGPASSWORD="$DB_PASSWORD"
# Threshold for low stock
LOW_STOCK_THRESHOLD=10

# Function to send a notification
send_notification() {
    local message="$1"
    echo "Sending notification: $message"
    # Add logic to send a notification (e.g., email, SMS, etc.)
    # For simplicity, we'll just print the message here
    echo "$message"
}

# Check for low stock in the PostgreSQL database
check_low_stock() {
    local query="SELECT item_id, item_name, stock FROM inventory WHERE stock < $LOW_STOCK_THRESHOLD;"
    local result
    result=$(psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$DB_NAME" -tA -c "$query")
    if [[ -n "$result" ]]; then
        echo "Low stock items:"
        echo "$result"
        send_notification "Low stock items: $result"
    else
        echo "No low stock items found."
    fi
}

check_low_stock
Fig 2. This script checks for low stock in a PostgreSQL database and sends a notification when the stock is below a specified threshold
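The script above covers monitoring and alerting; the automated replenishment mentioned earlier could be sketched in the same style. The following is only an illustration, assuming a hypothetical purchase_orders(item_id, quantity, status) table and a fixed reorder amount; it is not part of the store's actual schema.
#!/bin/bash
# Sketch: queue replenishment orders for items below the low-stock threshold
DB_HOST="10.1.1.0"
DB_PORT="5432"
DB_NAME="thienhang"
DB_USER="thienhang"
export PGPASSWORD="thienhang"
LOW_STOCK_THRESHOLD=10
REORDER_QUANTITY=50   # assumed fixed reorder amount
# Insert a pending purchase order for every item below the threshold
# (purchase_orders is a hypothetical table used for illustration)
psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$DB_NAME" -c "
INSERT INTO purchase_orders (item_id, quantity, status)
SELECT item_id, $REORDER_QUANTITY, 'pending'
FROM inventory
WHERE stock < $LOW_STOCK_THRESHOLD;"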
Security and Data Breach Prevention
Retail organizations handle sensitive customer data and financial information, making them prime targets for cyber-attacks and data breaches. Create security scripts that automate routine security checks, monitor system logs for suspicious activities, and apply security patches and updates in a timely manner. These scripts enhance the organization's security posture and help in early detection and prevention of potential security threats.
#!/bin/bash
# PostgreSQL log file
PG_LOG_FILE="pg.log"

# Function to send a notification
send_notification() {
    local message="$1"
    echo "Sending notification: $message"
    # Add logic to send a notification (e.g., email, SMS, etc.)
    # For simplicity, we'll just print the message here
    echo "$message"
}

# Monitor the PostgreSQL log for abnormal events
tail -Fn0 "$PG_LOG_FILE" | while read -r line; do
    if [[ "$line" == *"ERROR"* || "$line" == *"FATAL"* || "$line" == *"PANIC"* ]]; then
        send_notification "Abnormal PostgreSQL activity: $line"
    fi
done
Fig 3. This script sends a notification for abnormal PostgreSQL log entries
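The log monitor addresses detection; the patching side mentioned above can also be scripted. The sketch below assumes a Debian/Ubuntu host with apt and sudo access, and it only reports pending upgrades (using apt-get's simulate mode) rather than applying them automatically.
#!/bin/bash
# Sketch: report pending package updates on a Debian/Ubuntu host (assumption)
# Refresh the package index quietly
sudo apt-get update -qq
# Count upgradable packages without installing anything (-s = simulate)
PENDING=$(apt-get -s upgrade | grep -c "^Inst")
if [ "$PENDING" -gt 0 ]; then
    echo "There are $PENDING packages with pending updates."
else
    echo "System is up to date."
fi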
Task Automation and Employee Productivity
Repetitive manual tasks can consume employee time and lead to inefficiencies and errors. Develop scripts to automate routine tasks like generating sales reports, processing orders, or managing employee schedules. By automating these tasks, employees can focus on higher-value activities, improving overall productivity and reducing the risk of errors.
#!/bin/bash
# PostgreSQL settings
DB_HOST="10.1.1.0"
DB_PORT="5432"
DB_NAME="thienhang"
DB_USER="thienhang"
DB_PASSWORD="thienhang"
# Let psql read the password non-interactively
export PGPASSWORD="$DB_PASSWORD"
# Date range for the report (replace with your desired dates)
START_DATE="2023-09-01"
END_DATE="2023-09-30"
# Output file
OUTPUT_FILE="sales_report.csv"
# Generate the sales report and save it to CSV
psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$DB_NAME" -c "
COPY (
    SELECT date, product, quantity, revenue
    FROM sales
    WHERE date >= '$START_DATE' AND date <= '$END_DATE'
) TO STDOUT WITH (FORMAT csv, HEADER);" > "$OUTPUT_FILE"
echo "Sales report generated and saved to $OUTPUT_FILE."
Fig 4. This script generates a simple CSV sales report for a specific date range
Customer Service and Communication
Ineffective communication with customers can result in dissatisfaction and loss of business. Utilize scripts to automate personalized email or SMS notifications for order updates, promotions, or feedback requests. These automated communication scripts enhance customer engagement and satisfaction.
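As a sketch of what such a script might look like, the example below assumes the same PostgreSQL database, a hypothetical orders(order_id, customer_email, status, shipped_date) table, and a configured local mail command (e.g., from mailutils); none of these are part of the original setup.
#!/bin/bash
# Sketch: email customers whose orders shipped today
DB_HOST="10.1.1.0"
DB_PORT="5432"
DB_NAME="thienhang"
DB_USER="thienhang"
export PGPASSWORD="thienhang"
# Fetch today's shipped orders as "order_id|customer_email" rows
psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$DB_NAME" -tA -c \
  "SELECT order_id, customer_email FROM orders
   WHERE status = 'shipped' AND shipped_date = CURRENT_DATE;" |
while IFS='|' read -r order_id customer_email; do
    echo "Good news! Your order #$order_id has shipped." |
        mail -s "Order #$order_id update from Fresh Retail" "$customer_email"
    echo "Notification sent for order $order_id to $customer_email"
done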
System Performance Monitoring
Unoptimized system performance can lead to slow response times and hinder employee productivity. Develop monitoring scripts that continuously track system performance metrics, identify bottlenecks, and trigger alerts for potential performance issues. These scripts enable proactive performance optimization and ensure that critical applications run smoothly.
#!/bin/bash
# Disk usage percentage that triggers an alert
DISK_ALERT_THRESHOLD=90

# Function to send a notification
send_notification() {
    local message="$1"
    echo "Sending notification: $message"
    # Add logic to send a notification (e.g., email, SMS, etc.)
    # For simplicity, we'll just print the message here
    echo "$message"
}

# Function to collect system performance metrics
collect_system_metrics() {
    local date_time=$(date +"%Y-%m-%d %H:%M:%S")
    local cpu_usage=$(top -b -n 1 | grep "Cpu(s)" | awk '{print $2}' | awk -F. '{print $1}')
    local memory_usage=$(free -m | grep Mem | awk '{print $3}')
    local disk_usage=$(df -h / | awk 'NR==2{print $5}')
    echo "Date/Time: $date_time"
    echo "CPU Usage: $cpu_usage%"
    echo "Memory Usage: ${memory_usage}MB"
    echo "Disk Usage: $disk_usage"
    # Alert when the root file system crosses the threshold
    if [ "${disk_usage%\%}" -ge "$DISK_ALERT_THRESHOLD" ]; then
        send_notification "Disk usage on / has reached $disk_usage"
    fi
}

# Monitor system performance every 5 minutes
while true; do
    collect_system_metrics
    sleep 300
done
Fig 5. This script tracks system performance metrics and raises an alert when disk usage crosses a threshold
In summary, using scripts to automate, monitor, and streamline various processes within a retail organization can significantly enhance operational efficiency, data security, customer satisfaction, and overall performance while reducing the risks associated with manual interventions and system issues.
Research the documentation for the ps command. Discuss at least three useful things the ps command can report. (Refer to The Linux Command Line, Chapter 10: Processes).
The ps command in Linux is a fundamental tool for examining and managing processes. It offers valuable insights into the system's process landscape.
According to William Shotts, the ps command displays the unique process ID (PID) assigned to each running process. These PIDs are essential for identifying and managing processes. For instance, in a retail environment, suppose a sales management application is running. Using ps to list processes can provide the PIDs associated with this application.
Fig 1. This command displays a list of processes matching the name "shop.thienhang.com", along with their associated PIDs, user information, and other details.
> ps aux | grep shop.thienhang.com
tian 1201637 0.0 0.0 21148 2240 pts/1 S+ 21:10 0:00 grep --color=auto shop.thienhang.com
This line shows the details of the grep process that is filtering for "shop.thienhang.com".
tian is the username of the user running the grep command.
1201637 is the PID of the grep process.
0.0 and 0.0 are the CPU and memory usage percentages for the grep process.
21148 is the virtual memory size (VSZ) in kilobytes.
2240 is the resident set size (RSS) in kilobytes.
pts/1 indicates that the process is associated with a pseudo-terminal.
S+ indicates that the process is in an interruptible sleep state and running in the foreground process group.
21:10 is the time the grep process started.
0:00 is the CPU time the grep process has used.
grep --color=auto shop.thienhang.com is the grep command that's currently running and filtering the ps output for "shop.thienhang.com".
The command ps -eo comm,pcpu --sort -pcpu | head -5 shows the command name and the percentage of CPU usage for each process, sorted in descending order of CPU usage; piping through head -5 keeps the header line plus the four busiest processes (Henry-Stocker, 2020).
Fig 2. This command lists the most CPU-intensive processes, showing the command name (COMMAND) and the percentage of CPU usage (%CPU) for each process.
> ps -eo comm,pcpu --sort -pcpu | head -5
COMMAND %CPU
qemu-system-x86 60.3
chrome 9.7
chrome 8.2
code 5.7
The command ps aux --sort -pmem | head -5 lists the processes using the most memory, sorted in descending order by the percentage of memory used; as above, head -5 keeps the header line plus the four largest consumers.
The command ps -ef | egrep "shop.thienhang.com|nginx" uses ps with egrep to filter the output for processes whose command or process name contains either "shop.thienhang.com" or "nginx". This is useful for finding processes related to the Nginx server or any process with "shop.thienhang.com" in its name (Henry-Stocker, 2008).
Fig 3. This command filters the output for processes containing either "nginx" or "shop.thienhang.com" in their command or process name.
> ps -ef | egrep "shop.thienhang.com|nginx"
root 1318 1 0 Sep16 ? 00:00:00 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
www-data 1319 1318 0 Sep16 ? 00:00:00 nginx: worker process
www-data 1320 1318 0 Sep16 ? 00:00:00 nginx: worker process
www-data 1321 1318 0 Sep16 ? 00:00:00 nginx: worker process
…
tian 1236069 994090 0 22:00 pts/2 00:00:00 grep -E --color=auto shop.thienhang.com|nginx
The columns in the ps -ef output are:
UID: The user ID of the process owner.
PID: The process ID.
PPID: The parent process ID.
C: The processor utilization.
STIME: The start time of the process.
TTY: The controlling terminal of the process.
TIME: The cumulative CPU time used by the process.
CMD: The command or process name.
Write a script to determine whether a given file system or mount point is mounted and output the amount of free space on the file system if it is mounted. If the file system is not mounted, the script should output an error.
#!/bin/bash
echo "thienhang - Print mount point"
echo "--------------------"
echo "✅ Mounted Points:"
mount -v | grep "^/" | awk '{print $3}'
echo
echo "❌ Unmounted Points (listed in /etc/fstab but not mounted):"
awk '{print $2}' /etc/fstab | grep "^/" | while read -r line; do
    awk '{print $2}' /proc/mounts | grep -qx "$line" || echo "$line"
done
echo ""

check_mount_and_space() {
    local mount_point="$1"
    # Check whether the given mount point is currently mounted (exact match on the mount point field)
    if awk '{print $2}' /proc/mounts | grep -qx "$mount_point"; then
        echo "Mount point '$mount_point' is mounted."
        echo "Free space on '$mount_point':"
        df -h --output=avail "$mount_point" | tail -n 1
    else
        echo "Error: Mount point '$mount_point' is not mounted." >&2
        return 1
    fi
}

# Usage: ./check_mount_and_space.sh <Mount Point>
if [ -z "$1" ]; then
    echo "Usage: $0 <Mount Point>"
    exit 1
fi

check_mount_and_space "$1"
Fig 4. Example output when the given file system is mounted, showing the amount of free space (GeeksforGeeks, 2019)
Fig 5. Example output when the given file system is not mounted
Reflect on your first week in this class at the graduate level. What went well? What aspects of the unit were challenging for you?
I found the introductory lectures and readings very engaging. The way the professor presented the course overview and objectives helped me grasp the direction of the class.
Collaborative group discussions during our first in-person class were enlightening. It was inspiring to hear different perspectives from fellow students, and it sparked a sense of intellectual curiosity. The class structure and organization were clear, and I appreciate how accessible the course materials are. The learning management system made it easy to navigate through lecture notes, assignments, and supplementary resources.
Adapting to the higher academic expectations at the graduate level was initially challenging. The depth and complexity of the course content demand a more thorough understanding and critical analysis. Time management proved to be a struggle during the first week. Balancing the demands of this class with other commitments required a reevaluation of my daily schedule and study habits. The rapid pace of the course and the volume of information to absorb were overwhelming at times. I realized the importance of breaking down the material into manageable segments and seeking clarification when needed.
In reflecting on this first week, I recognize the need to refine my time management skills and develop effective strategies to handle the academic rigor expected at the graduate level. I plan to leverage the available resources and seek guidance from professors and peers to overcome the challenges I've identified. Moving forward, I aim to maintain a proactive approach to my studies, seeking a balance between depth of understanding and efficient time allocation.
References:
Akhtar, A. N., Buchholtz, J., Ryan, M., & Setty, K. (2012). Database backup and recovery best practices. ISACA Journal, 1, 1-6. https://www.isaca.org/resources/isaca-journal/past-issues/2012/database-backup-and-recovery-best-practices
GeeksforGeeks. (2019). mount command in Linux with Examples. GeeksforGeeks. https://www.geeksforgeeks.org/mount-command-in-linux-with-examples/
Henry-Stocker, S. (2020, November 13). How to sort ps output. Network World. https://www.networkworld.com/article/3596800/how-to-sort-ps-output.html
Henry-Stocker, S. (2008, August 7). Long listings for the ps command. Network World. https://www.networkworld.com/article/2778219/long-listings-for-the-ps-command.html