Hardware Dependencies #1

Open
oroszgabor83HU opened this issue Sep 28, 2021 · 4 comments

Comments

@oroszgabor83HU

Hi Mark,
please help me with this issue.
If I have, let's say, a "non-PLUS-compatible" US machine (one not listed among the supported sources in the PLUS manual), how can I technically build a training system similar to yours?
Are there any restrictions (e.g. brands that cannot be used)?
Where can I find some sources on this?

Thanks in advance,
Gabor

@ungi
Member

ungi commented Sep 28, 2021

If your ultrasound machine has any standard video output (e.g. for an external monitor), then you could use a frame grabber to feed the ultrasound video into PLUS. Currently, Epiphan and Imaging Controls frame grabbers are supported by PLUS.
Note that this project is a bit outdated. We are now moving towards using only a video camera (webcam) for computerized evaluation of simulated medical procedures. Position trackers are always hard to mount on tools, especially lightweight ones like needles. But AI video processing can recognize all the tools, e.g. for central line insertion. We can now provide better feedback from a single camera than we could earlier with trackers. Accurate aiming with the needle is of course important, but there are indirect signs on an external video feed that can verify accurate needle placement, e.g. colored fluid in the syringe.
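A minimal sketch of how either path could be prototyped, assuming the frame grabber or webcam is exposed to the operating system as a standard capture device and that OpenCV is installed; the device index and frame size below are placeholders, not values taken from PLUS:

```python
import cv2

# Hypothetical example: grab frames from a capture device (a webcam, or a
# frame grabber that the OS exposes as a standard video source).
# Device index 0 and the 1280x720 request are placeholders.
capture = cv2.VideoCapture(0)
capture.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

if not capture.isOpened():
    raise RuntimeError("Could not open the capture device")

try:
    while True:
        ok, frame = capture.read()  # frame is a BGR numpy array
        if not ok:
            break
        # A trained model (e.g. a tool-detection network) would consume the
        # frame here; this sketch just displays it as a quick sanity check.
        cv2.imshow("Video feed", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
finally:
    capture.release()
    cv2.destroyAllWindows()
```

If the same feed should instead go into PLUS, the frame grabber would be configured as a device in a PLUS configuration file rather than read directly like this.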

@oroszgabor83HU
Author

Hi Tamas,
thanks for the quick answer!
The AI-based video tracking/processing sounds very interesting.
Do you have any publicly available source for this project?
I am an intensivist from Hungary and would be interested in this topic for other invasive procedures as well (e.g. percutaneous tracheostomy).

Thanks,
Gabor

@ungi
Member

ungi commented Oct 5, 2021

Our source code is in this repository, under DeepLearnLive: https://github.com/SlicerIGT/aigt
It is in an early, experimental phase, not a solid foundation to build on yet. I would just start collecting videos of training procedures. If the quality is good enough for a human rater to evaluate, then sooner or later it will be good enough for AI as well.
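A minimal sketch of what "start collecting videos" could look like in practice, assuming a standard webcam and OpenCV: record each training session to a timestamped file so a human rater (and later a model) can review it. The device index, codec, frame rate, and output file name are illustrative placeholders.

```python
import cv2
import time

# Hypothetical recording script: capture a training session from a webcam
# and write it to a timestamped video file for later human/AI review.
capture = cv2.VideoCapture(0)
if not capture.isOpened():
    raise RuntimeError("Could not open the camera")

fps = 30.0
width = int(capture.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(capture.get(cv2.CAP_PROP_FRAME_HEIGHT))
output_name = time.strftime("training_%Y%m%d_%H%M%S.mp4")
writer = cv2.VideoWriter(output_name,
                         cv2.VideoWriter_fourcc(*"mp4v"),
                         fps, (width, height))

try:
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        writer.write(frame)
        cv2.imshow("Recording (press q to stop)", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
finally:
    writer.release()
    capture.release()
    cv2.destroyAllWindows()
```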

@oroszgabor83HU
Author

Thank you very much!
G
