
Drone Navigation Detection using Advanced Reinforcement Learning Techniques #927

Open
Panchadip-128 opened this issue Oct 19, 2024 · 2 comments

@Panchadip-128

Deep Learning Simplified Repository (Proposing new issue)

🔴 Create a Drone Navigation Detection System using Reinforcement Learning :

🔴 To create an environment using RL that detects the navigation pathway through the environment and maintains it to ensure successful navigation. Drone Navigator is an advanced software solution designed to empower autonomous drones with the capability to navigate complex environments efficiently and safely using Reinforcement Learning (RL) techniques.

🔴 Created within the .ipynb file, with random noise injected so the model generalizes to situations completely unknown to it:

🔴 Approach : Problem Setup: The drone navigation problem is framed as a Markov Decision Process (MDP) where the drone is the agent, the environment represents the 3D space with obstacles, and the goal is to navigate to the destination efficiently and safely.

State Representation: The drone's state is represented by its position, velocity, and sensor inputs for detecting obstacles.

Action Space: The action space consists of possible drone movements (e.g., moving up, down, forward, backward, left, right) and adjustments in speed or direction.
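As a concrete illustration, the state and action space above can be sketched as a small discrete grid world. The names, grid size, and encoding below are illustrative assumptions, not taken from the issue's notebook:

```python
import numpy as np

# Hypothetical discrete 3-D action encoding: each action is a unit move
# along one axis (up/down on z, forward/backward on x, left/right on y).
ACTIONS = {
    0: np.array([0, 0, 1]),   # up
    1: np.array([0, 0, -1]),  # down
    2: np.array([1, 0, 0]),   # forward
    3: np.array([-1, 0, 0]),  # backward
    4: np.array([0, -1, 0]),  # left
    5: np.array([0, 1, 0]),   # right
}

def step_position(position, action_id, grid_size=10):
    """Apply an action and clip the drone's position to the grid bounds."""
    new_pos = position + ACTIONS[action_id]
    return np.clip(new_pos, 0, grid_size - 1)

pos = np.array([5, 5, 5])
pos = step_position(pos, 0)  # move up: z increases by 1
```

In a fuller state representation, velocity and sensor readings would be concatenated with the position vector; this sketch keeps only the positional part.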

Reward Function: The reward is designed to encourage the drone to move closer to the target and penalize collisions with obstacles or inefficient paths. Positive rewards are given for reaching the goal.
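A minimal sketch of such a shaped reward is below. The constants are illustrative assumptions; the issue does not fix exact values:

```python
import numpy as np

def reward(pos, goal, obstacles, step_penalty=-0.01,
           collision_penalty=-1.0, goal_reward=1.0):
    """Shaped reward: penalize collisions, lightly penalize each step
    (discouraging inefficient paths), and reward reaching the goal."""
    if any(np.array_equal(pos, o) for o in obstacles):
        return collision_penalty
    if np.array_equal(pos, goal):
        return goal_reward
    return step_penalty
```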

Reinforcement Learning Algorithm:

Q-Learning or Deep Q-Networks (DQN) are used to train the drone to learn an optimal navigation policy.
The model learns from interactions with the environment by trial and error, updating its policy to maximize cumulative reward.

Obstacle Avoidance: The RL model is trained to detect and avoid obstacles using inputs from sensors or simulated environment data, ensuring safe navigation.
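The tabular Q-learning update underlying both variants (DQN replaces the table with a neural network approximator) can be sketched as:

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One Q-learning step on a table Q of shape (n_states, n_actions):
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s][a] += alpha * (td_target - Q[s][a])
    return Q
```

Collisions surface here only through the reward `r`: a large negative reward for hitting an obstacle lowers Q-values along unsafe paths, which is how avoidance is learned.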

Training and Evaluation: The model is trained in simulated environments with varying levels of complexity, followed by evaluation using metrics like efficiency (path length) and safety (number of collisions).
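Putting the pieces together, here is an end-to-end sketch on a toy 4x4 grid standing in for the simulated 3-D environment (all names and constants are illustrative assumptions, not the issue's actual notebook), including a greedy evaluation that reports the two metrics named above, path length and collision count:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 4x4 grid: start (0,0), goal (3,3), one obstacle at (1,1).
SIZE, GOAL, OBST = 4, (3, 3), (1, 1)
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(s, a):
    nxt = (min(max(s[0] + MOVES[a][0], 0), SIZE - 1),
           min(max(s[1] + MOVES[a][1], 0), SIZE - 1))
    if nxt == OBST:
        return s, -1.0, False    # collision: blocked in place, penalty
    if nxt == GOAL:
        return nxt, 1.0, True
    return nxt, -0.01, False     # step cost discourages long paths

def idx(s):
    return s[0] * SIZE + s[1]    # flatten (row, col) to a table index

# Epsilon-greedy tabular Q-learning over 2000 training episodes.
Q = np.zeros((SIZE * SIZE, len(MOVES)))
alpha, gamma, eps = 0.5, 0.95, 0.2
for _ in range(2000):
    s, done, t = (0, 0), False, 0
    while not done and t < 50:
        a = int(rng.integers(4)) if rng.random() < eps else int(np.argmax(Q[idx(s)]))
        s2, r, done = step(s, a)
        Q[idx(s), a] += alpha * (r + gamma * np.max(Q[idx(s2)]) * (not done)
                                 - Q[idx(s), a])
        s, t = s2, t + 1

# Greedy evaluation: path length and number of blocked (collision) steps.
s, path_len, collisions, done = (0, 0), 0, 0, False
while not done and path_len < 50:
    a = int(np.argmax(Q[idx(s)]))
    s2, r, done = step(s, a)
    collisions += (s2 == s)      # counts steps where the drone was blocked
    s, path_len = s2, path_len + 1
```

Varying the grid size and obstacle count gives the "varying levels of complexity" mentioned above; a real drone simulator would replace the toy `step` function.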


📍 Follow the Guidelines to Contribute in the Project :

  • You need to create a separate folder named as the Project Title.
  • Inside that folder, there will be four main components.
    • Images - To store the required images.
    • Dataset - To store the dataset or, information/source about the dataset.
    • Model - To store the machine learning model you've created using the dataset.
    • requirements.txt - This file will contain the required packages/libraries to run the project in other machines.
  • Inside the Model folder, the README.md file must be filled up properly, with proper visualizations and conclusions.

🔴🟡 Points to Note :

  • The issues will be assigned on a first come, first served basis; 1 Issue == 1 PR.
  • The Issue Title and PR Title should be the same. Include the issue number along with it.
  • Follow the Contributing Guidelines & Code of Conduct before you start contributing.

To be Mentioned while taking the issue :

  • Full name : Panchadip Bhattacharjee
  • GitHub Profile Link : https://github.com/Panchadip-128
  • Email ID : panchadip128@gmail.com
  • Participant ID (if applicable):
  • Approach for this Project : As described in the approach section above.

  • What is your participant role? (Mention the Open Source program)

Happy Contributing 🚀

All the best. Enjoy your open source journey ahead. 😎


Thank you for creating this issue! We'll look into it as soon as possible. Your contributions are highly appreciated! 😊

@abhisheks008
Owner

Interesting one. Assigning this issue to you @Panchadip-128
