My employer makes significant use of robotic weld cells, and while working with the equipment I've noticed what seems to be room for improvement in the programming. This is purely a personal academic project: I'm curious whether machine learning could produce results comparable or superior to the human-made programming used at work. Unfortunately, that means there will be some vagueness, since I have to stick to publicly available knowledge about their operations. I'll be working from generic, publicly available reference material, and won't be able to share most, if any, of the end result.
I would like to run simulations in a 3D environment, using machine learning to train a program to find the most efficient sequence of movements & operations. There are several constraints the program would have to abide by, and breaking any single one of them would be a failure condition for that simulation. The simulations would also need scripted actions representing the human component of loading & operating the cells, which the program recognises as the points where it has to start/stop operating the robotics.
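To make that concrete, here's a minimal Gymnasium-style skeleton of how I picture an episode: constraint violations terminate with failure, and a scripted operator event gates when the arm may move. Every name, limit, and the 50-step start delay here are placeholders I made up, not details of any real cell.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class WeldCellEnv(gym.Env):
    """Toy weld-cell episode: violating any hard constraint ends the run."""

    def __init__(self):
        self.n_joints = 6  # placeholder: a six-axis arm
        # Action: commanded joint velocities, normalised to [-1, 1].
        self.action_space = spaces.Box(-1.0, 1.0, shape=(self.n_joints,), dtype=np.float32)
        # Observation: joint angles plus a flag for "operator has started the cell".
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(self.n_joints + 1,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.q = np.zeros(self.n_joints)   # joint angles
        self.t = 0
        self.cell_started = False          # flipped by the scripted operator
        return self._obs(), {}

    def step(self, action):
        self.t += 1
        if self.t == 50:                   # scripted action: operator starts the cell
            self.cell_started = True
        # Hard constraint: commanding motion before the start signal is a failure.
        moved_early = (not self.cell_started) and bool(np.any(np.abs(action) > 1e-3))
        if self.cell_started:
            self.q = self.q + 0.05 * np.asarray(action)  # crude integration step
        terminated = moved_early           # violation => failed simulation
        reward = -1.0 if moved_early else 0.0
        truncated = self.t >= 500          # time limit, not a failure
        return self._obs(), reward, terminated, truncated, {}

    def _obs(self):
        return np.concatenate([self.q, [float(self.cell_started)]]).astype(np.float32)
```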
Here are some examples of similarly capable systems:
- https://www.youtube.com/watch?v=QzcBXgUuCMU
- https://www.youtube.com/watch?v=21hTsTvx_iI
- https://www.youtube.com/watch?v=0L7Xk5_s3QQ
Examples of constraints (a sketch of how the first few might be checked in code follows the list):
- Minimum and maximum values for the rotation of each joint in an arm.
- Maximum speed for the rotation of each joint in an arm.
- Maximum speed that can be experienced by a part being transported by a "grabber" attachment on an arm.
- No movement or actions by an arm until the representation of the human operator (this is where scripted actions come in) starts the cell.
- Light curtains between the arm and the operator may not be broken while the cell is running. For anyone unfamiliar: a light curtain is basically an invisible sensor barrier used in industrial environments; connected machinery enters an emergency stop if anything physically interrupts the barrier.
- No part of an arm may make contact with any part of the environment, or another arm, except for the welding tips & "grabber" on the tool head making contact with the parts being worked on.
- No part of an arm may cross a light curtain into an adjacent cell unless it is delivering a part to, or retrieving a part from, that cell.
- Parts must be removed from & placed in their jigs in a manner that does not cause the part to collide with the locating pins or other fixtures.
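The first three constraints, for example, could be implemented as per-step checks along these lines. The limits below are made-up placeholders, not real robot specifications.

```python
import numpy as np

# Placeholder limits for an assumed six-axis arm.
JOINT_MIN = np.radians([-170, -120, -170, -185, -120, -350])
JOINT_MAX = np.radians([170, 120, 170, 185, 120, 350])
JOINT_VEL_MAX = np.radians([150, 150, 180, 300, 300, 420])  # rad/s, from assumed deg/s specs
PART_SPEED_MAX = 1.5  # m/s, assumed limit for a carried part

def violates_constraints(q, q_prev, dt, grab_pos, grab_pos_prev, holding_part):
    """Return True if this simulation step breaks any hard constraint."""
    qd = (q - q_prev) / dt  # finite-difference joint velocities
    if np.any(q < JOINT_MIN) or np.any(q > JOINT_MAX):
        return True  # a joint rotated outside its min/max range
    if np.any(np.abs(qd) > JOINT_VEL_MAX):
        return True  # a joint rotated too fast
    if holding_part:
        part_speed = np.linalg.norm(grab_pos - grab_pos_prev) / dt
        if part_speed > PART_SPEED_MAX:
            return True  # the carried part exceeded its speed limit
    return False
```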
The goals are much simpler: press the weld tips on the tool head together at pre-defined places on the parts being worked on, and then move the parts to the next cell. The arms in the simulation would need to figure out how and where to position themselves, what order to tackle welds in, and how to move themselves and parts without colliding with anything.
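My current (entirely assumed) idea for scoring that: a small per-step penalty so more efficient sequences score higher, a bonus per completed weld, and a terminal bonus for handing the part off to the next cell.

```python
def reward(welds_completed_this_step, part_delivered, constraint_violated):
    if constraint_violated:
        return -100.0  # hard failure; the episode ends here
    r = -0.01  # per-step cost: shorter cycles score higher
    r += 10.0 * welds_completed_this_step  # each pre-defined weld point hit
    if part_delivered:
        r += 100.0  # all welds done and the part moved to the next cell
    return r
```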
I intend to start simple, by creating an environment with obstacles that the arm has to move around in order to touch certain points. This is meant to test whether the machine learning can effectively control a multi-joint arm, and to help me figure out the best way for the algorithm to drive it. After that, I'll gradually increase the complexity following each successful simulation (and make the necessary changes when it fails) until it reaches a point where it can replicate real-world robotics.
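That first stage might look something like this PyBullet sketch, where any contact with the obstacle is a failure. The kuka_iiwa model and small cube ship with pybullet_data; the target, obstacle position, and the fixed placeholder policy are arbitrary choices of mine.

```python
import math
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)  # headless: realistic visuals don't matter here
p.setAdditionalSearchPath(pybullet_data.getDataPath())
arm = p.loadURDF("kuka_iiwa/model.urdf", useFixedBase=True)
obstacle = p.loadURDF("cube_small.urdf", basePosition=[0.4, 0.0, 0.5])
target = [0.5, 0.3, 0.6]  # point the end effector must touch

for step in range(240):
    # Placeholder: joint targets would come from the learned policy instead.
    for j in range(p.getNumJoints(arm)):
        p.setJointMotorControl2(arm, j, p.POSITION_CONTROL, targetPosition=0.3)
    p.stepSimulation()
    # Failure condition: any contact between the arm and the obstacle.
    if p.getContactPoints(bodyA=arm, bodyB=obstacle):
        print("collision with obstacle at step", step)
        break
    ee = p.getLinkState(arm, 6)[0]  # end-effector world position
    if math.dist(ee, target) < 0.05:
        print("reached target at step", step)
        break
```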
I would greatly appreciate advice on what software/libraries/engines/resources to refer to for this project. Realistic visuals are not at all important, but support for complex shapes is a must.