So let me try to explain this in the best way I can--
We all hear differently from one another, from what I can gather from reading posts in this sub. So what if we were able to create a 3D model of our individual ears, possibly through ultrasound or other means, and use that data to simulate the way sound waves travel to our eardrum? Something equivalent to Mesh2HRTF, but covering more than just the outer ear.
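To make the "simulate sound waves to the eardrum" part a bit more concrete, here is a deliberately crude toy sketch, not a BEM solver like Mesh2HRTF uses: it treats a few hypothetical pinna reflection paths as extra delays relative to the direct path and shows the comb-filter-like coloration they would add. All path lengths and gains are made-up illustrative numbers.

```python
# Toy illustration only: real simulations solve the wave equation over the
# full 3D ear mesh; here we just sum a direct path and a few reflections.
import numpy as np

c = 343.0                                    # speed of sound in air, m/s
direct_path = 0.010                          # 10 mm canal entrance -> eardrum (illustrative)
reflection_paths = [0.013, 0.016, 0.021]     # hypothetical pinna-reflection path lengths, m
reflection_gain = 0.5                        # crude energy loss per reflection

freqs = np.linspace(200, 20000, 500)
response = np.ones_like(freqs, dtype=complex)          # direct sound
for path in reflection_paths:
    delay = (path - direct_path) / c                   # extra travel time of each reflection
    response += reflection_gain * np.exp(-2j * np.pi * freqs * delay)

magnitude_db = 20 * np.log10(np.abs(response))
print("Peak-to-dip variation:", magnitude_db.max() - magnitude_db.min(), "dB")
```

Even this toy version shows why individual ear geometry matters: change the path lengths slightly and the peaks and dips land at different frequencies.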
AI would come into play by taking scans from many people partaking in the study and feeding that data into an algorithm that trains a neural network to learn how specific differences in ear structure translate into differences in frequency response. Ultimately, the end user could get their ears scanned at an audiologist, take that information home, plug it into the algorithm, select their headphone model from a database of headphone measurements, and then personalize that preset to match exactly what their ears should be hearing, producing the most natural-sounding headphone for their specific use case.
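A minimal sketch of what that end-user step might look like, assuming a trained model already exists: ear-scan features go in, a personal target response comes out, and the EQ preset is just the difference between that target and the headphone's measured response. Every file name, array shape, and the model itself are hypothetical placeholders here, not any existing tool's API.

```python
import numpy as np

freqs = np.logspace(np.log10(20), np.log10(20000), 128)      # 20 Hz - 20 kHz grid

# Hypothetical: features extracted from the user's 3D ear scan
ear_features = np.load("my_ear_scan_features.npy")           # e.g. shape (N,)

# Hypothetical trained model: maps ear features -> personal target response (dB)
def predict_personal_target(features: np.ndarray) -> np.ndarray:
    W = np.load("trained_weights.npy")                        # shape (128, N)
    return W @ features

# Hypothetical database entry: measured response of the chosen headphone (dB)
headphone_response = np.load("headphone_measurement.npy")     # shape (128,)

personal_target = predict_personal_target(ear_features)

# The EQ preset is the gain needed to move the headphone's measured
# response onto the personal target at each frequency band.
eq_gains_db = personal_target - headphone_response

for f, g in zip(freqs[::16], eq_gains_db[::16]):              # coarse summary
    print(f"{f:8.0f} Hz: {g:+.1f} dB")
```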
Is this possible, or am I just thinking sci-fi here? I understand there would still be discrepancies, since unit and driver variation is a thing, but I figure this could get us pretty close to perfect sound. What do you think?