This repository contains the implementation of a task offloading system designed for IoT devices, leveraging fog and cloud nodes. The system dynamically selects the best node for task offloading using a Weighted Formula Method. The goal is to improve performance and reduce latency with the help of Redis caching. The entire project is containerized using Docker for easy deployment and management.
- IoT Device Layer: IoT devices generate tasks and submit them to the manager.
- Manager: The manager receives tasks from IoT devices and evaluates the status of all fog nodes (checking CPU usage, memory usage, task queue length, etc.). Based on the Weighted Formula Method, it selects the most suitable fog node for task offloading.
- Fog Nodes: Fog nodes process tasks based on their resource availability. If selected, the node handles the task processing; otherwise, it communicates with the manager for further decision-making. Redis caching is utilized to store frequently requested tasks.
- Cloud Node: If none of the fog nodes are available, tasks are forwarded to the cloud node for processing.
- Redis Cache: Frequently accessed tasks are cached in Redis's in-memory store, reducing processing time and server load for repeated tasks.
- Task Generation: IoT devices generate tasks with varying characteristics, such as size and priority. The task generation follows a Poisson distribution to simulate real-world randomness.
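As a rough sketch of how such a generator might look (the function name, task fields, and parameter ranges here are illustrative, not the repository's actual `device.py`), Poisson arrivals can be produced by drawing exponential inter-arrival times:

```python
import random

def generate_tasks(rate_per_sec, duration_sec, seed=None):
    """Generate tasks whose arrivals follow a Poisson process:
    inter-arrival times are exponentially distributed with the
    given mean rate. Size and priority vary per task."""
    rng = random.Random(seed)
    tasks = []
    t = 0.0
    while True:
        t += rng.expovariate(rate_per_sec)  # exponential gap => Poisson arrivals
        if t > duration_sec:
            break
        tasks.append({
            "arrival": round(t, 3),
            "size_kb": rng.randint(10, 500),                  # illustrative range
            "priority": rng.choice(["low", "medium", "high"]),
        })
    return tasks
```

With `rate_per_sec=5` and `duration_sec=10`, this yields roughly fifty tasks with random, bursty spacing, which is the behaviour the simulation relies on.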
- Task Submission: Tasks are sent to the manager, which checks all fog nodes for resource availability.
- Fog Node Selection Using Weighted Formula Method:
  - The manager evaluates each fog node based on the following parameters:
    - CPU Usage
    - Memory Usage
    - Task Queue Length
    - Network Delay
    - Energy Consumption
  - Using a weighted formula, the node with the best score is selected for task offloading. If no suitable fog node is found, the task is forwarded to the cloud.
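A minimal sketch of this selection step (the weights, the "lower score is better" convention, and the fallback threshold are assumptions for illustration; the actual formula lives in `manager.py`):

```python
def weighted_score(node, weights):
    """Combine the five monitored metrics into one score.
    All metrics are assumed normalized to [0, 1]; lower is better."""
    return sum(weights[k] * node[k]
               for k in ("cpu", "mem", "queue", "delay", "energy"))

def select_node(fog_nodes, weights, threshold=0.7):
    """Pick the fog node with the lowest weighted score, or fall back
    to the cloud if even the best node exceeds the threshold."""
    best = min(fog_nodes, key=lambda n: weighted_score(n, weights))
    if weighted_score(best, weights) <= threshold:
        return best["name"]
    return "cloud"  # no suitable fog node available
```

The threshold is what turns this into an offloading decision rather than a plain ranking: a heavily loaded cluster can score every fog node as "unsuitable" and push the task to the cloud.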
- Task Processing: The selected fog or cloud node processes the task using its available resources. Redis caches frequently accessed tasks to speed up future processing.
Redis is integrated into the system to store frequently requested tasks. If a task is found in the Redis cache, it is fetched directly, avoiding the need for reprocessing. This reduces latency and server load, significantly improving the overall system performance.
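The cache lookup can be sketched roughly as follows (the key scheme, TTL, and class name are assumptions; any object exposing redis-py's `get`/`setex` interface works, which also keeps the sketch testable without a live Redis server):

```python
import json

class TaskCache:
    """Wrap a Redis-like client: return a cached result if present,
    otherwise compute it, store it with a TTL, and return it."""

    def __init__(self, client, ttl_sec=300):
        self.client = client  # e.g. redis.Redis(host="redis", port=6379)
        self.ttl = ttl_sec

    def get_or_compute(self, task_id, compute_fn):
        raw = self.client.get(task_id)
        if raw is not None:
            return json.loads(raw), True       # cache hit: skip reprocessing
        result = compute_fn()
        self.client.setex(task_id, self.ttl, json.dumps(result))
        return result, False                   # cache miss: computed fresh
```

On a hit, `compute_fn` is never invoked, which is exactly where the latency and server-load savings come from.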
We have extensively tested the overall performance of each container in the system (IoT devices, fog nodes, cloud node, and Redis) to monitor:
- CPU usage
- Memory usage
- Disk I/O
- Network usage
We used various tools to monitor container metrics, such as `docker stats`, htop, and psutil, to gather CPU, memory, and I/O metrics.
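For context, the kind of per-process counters these tools report can also be pulled from the standard library alone. The following Unix-only sketch uses `resource.getrusage` (the project itself used psutil and `docker stats` for container-level numbers, so this is a simplified stand-in):

```python
import resource

def sample_self_metrics():
    """Snapshot CPU time, peak memory, and block-I/O counts for the
    current process via getrusage (Unix only)."""
    ru = resource.getrusage(resource.RUSAGE_SELF)
    return {
        "cpu_user_sec": ru.ru_utime,     # user-mode CPU time
        "cpu_sys_sec": ru.ru_stime,      # kernel-mode CPU time
        "max_rss_kb": ru.ru_maxrss,      # peak resident set size (KiB on Linux)
        "io_blocks_in": ru.ru_inblock,   # filesystem block reads
        "io_blocks_out": ru.ru_oublock,  # filesystem block writes
    }
```

Network counters are the one category `getrusage` does not cover, which is one reason container-level tooling like `docker stats` was used as well.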
```
Task_offloading/
│
├── iot_device/
│   ├── device.py
│   ├── Dockerfile
│   └── requirements.txt
│
├── fog_nodes/
│   ├── fog_node1.py
│   ├── fog_node2.py
│   ├── fog_node3.py
│   ├── Dockerfile
│   └── requirements.txt
│
├── manager/
│   ├── manager.py          # Centralized manager for task distribution
│   ├── Dockerfile
│   └── requirements.txt
│
├── redis/                  # Redis cache setup
│   ├── Dockerfile
│   └── requirements.txt
│
├── docker-compose.yml      # For orchestration
└── README.md
```
- Docker
- Docker Compose
- Python 3.8 or higher
- Clone the Repository and install dependencies:

  ```bash
  git clone https://github.com/Talib8335/Task-Offloading.git
  cd Task-Offloading
  pip install -r requirements.txt
  ```

- Build Docker Images:

  ```bash
  docker compose build
  ```

- Start the Services:

  ```bash
  docker compose up -d
  ```

- Access the Containers:

  ```bash
  # IoT device container:
  docker exec -it iot_device /bin/bash
  # Fog node 1 (similarly for fog_node2 and fog_node3):
  docker exec -it fog_node_1 /bin/bash
  # Cloud node:
  docker exec -it cloud_node /bin/bash
  ```

- View logs of a specific container:

  ```bash
  sudo docker compose logs -f <container_name>   # example: fog_node1
  ```

- Stop the Services:

  ```bash
  docker compose down
  ```
Below are snapshots showing the performance metrics of the system:
These snapshots represent the combined performance metrics across all containers.
Below is a terminal view showing the task processing across all containers.
In the current implementation, we have used a homogeneous system, where all fog nodes have identical resource configurations (CPU, memory, etc.). This setup provided a controlled environment to test our task offloading algorithms and caching mechanisms.
In future updates, we plan to move towards a heterogeneous system where fog nodes will have different configurations (e.g., varying CPU cores, memory, and network speeds). This will allow for more dynamic and intelligent task offloading decisions, better reflecting real-world IoT-Fog-Cloud environments. We also aim to support more CPU- and memory-intensive workloads as the system evolves.