Do you use 2D or 3D information for search?
When you mark a 2D ROI, it is automatically expanded to a 3D ROI, and all the 3D information is taken into account to identify visually similar patterns.
Is this a web-based solution?
Our solution can be implemented as a web application. However, in order to comply with GDPR and data security regulations, our software is currently hosted within the hospital’s network.
Is your product certified?
contextflow SEARCH v1.0 is CE marked. We are currently working on the FDA approval process. In addition, our company received its ISO 13485:2016 certification via TÜV SÜD, meaning we have established a quality management system which meets the requirements of the ISO 13485:2016 standard.
What do the colors represent?
Green: No pathological patterns have been detected.
Orange: Borderline cases with potential pathological patterns that must be reviewed by a doctor.
Red: We have detected pathological patterns.
Does the size of the pattern or number of patterns affect which category a case is placed into?
No, the size of the pattern and the number of patterns detected do NOT affect whether a case is placed into the red category or not. For example, a case with only one very small region of one disease pattern is still placed in the red category.
What kind of algorithm is contextflow built on?
We develop and train our own (deep) convolutional neural networks, which are designed to create “maps” (embeddings) where semantically similar patterns lie very close together and different patterns lie far apart. We can then identify visually similar cases by finding which patterns of existing cases are closest to the currently selected one. For those similar cases, we can look at all the clinical and relevant information connected to them.
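As a rough illustration of this embedding-based retrieval (the real system uses CNN embeddings of 3D image patterns; the toy vectors, case IDs and helper names below are invented for illustration only), ranking cases by cosine similarity in the embedding space could look like:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def most_similar(query, database):
    """Return case IDs ranked by embedding similarity to the query pattern."""
    return sorted(database,
                  key=lambda cid: cosine_similarity(query, database[cid]),
                  reverse=True)

# Toy embeddings: semantically similar patterns cluster together in the map.
database = {
    "case_A": [0.9, 0.1, 0.0],
    "case_B": [0.8, 0.2, 0.1],
    "case_C": [0.0, 0.1, 0.9],
}
query = [0.85, 0.15, 0.05]   # embedding of the currently selected ROI
ranking = most_similar(query, database)  # most similar cases first
```

In a real deployment the embeddings come from the trained network and the nearest-neighbor search runs over thousands of knowledge-base cases, but the ranking principle is the same.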
Can you retrain your algorithm solely on our data?
We don’t plan on retraining on local data at the moment. We simply enable a search of your local data.
How do you ensure the quality of your methods?
We have a streamlined data curation and annotation process in which, together with our radiology experts, we validate findings in the image. Annotations are created to an extent where we can benchmark the quality of our methods. We use a mixture of unsupervised, semi-supervised and supervised approaches, minimizing the amount of annotated data necessary to provide high-quality image retrieval performance.
How do you ensure your system doesn’t have bias?
Excellent question! There are several aspects:
1) The dataset might not cover everything: you might work with populations where one type of disease is not very prevalent. You therefore need to be sure that the database covers most or all of the relevant patterns. For example, lung diseases in coal miners probably do not differ much between the US and Europe, but breast tissue in European women is on average denser than in Asian women, so there you would need different datasets. There is no perfect answer to this except to take direction from the doctors on which instances require different datasets.
2) The dataset could be biased by how people report. We do not blindly take the data from the hospital; our data curation process acts as an in-between step.
3) Suppose a new disease description appears, so the reference content is updated. The database itself may still be outdated, but we link to the reference database, which is always state of the art.
4) We continually validate the algorithms to avoid overfitting the model.
Does your technology replace radiologists?
No, our technology is designed to support radiologists during the diagnostic process and does not perform automated diagnosis. We simply help radiologists with difficult cases and help them to tackle their high workloads. The radiologist is still in the driver’s seat.
What kind of data does the knowledge base contain?
The knowledge base consists of more than 8000 lung CT scans and corresponding report information acquired during clinical routine over a time span of approximately 3 years. During data collection, inclusion criteria were limited to technical quality criteria only, in order to obtain an unbiased sample of the routine population.
Where does the data in the knowledge base come from?
The anonymized data in the knowledge base is provided by contextflow’s partner hospitals. We have a great network thanks to team members who have been in this domain for a long time. Through those personal contacts, we earn trust and develop win-win partnerships. In addition, we go to the leading radiology conferences in the world and make contacts with international partners there.
Do you pay for data?
The clinical value derived from our AI tools incentivizes our partner hospitals to share their underutilized data. It’s a win-win situation, and thus we do not pay for data.
How do you ensure data privacy for cases part of your knowledge base?
contextflow’s knowledge base contains only fully anonymized datasets, which cannot be traced back to individual patients.
In the demo I saw that patient information (age, sex) was available. On the other hand the statement is that no patient data is transferred. Where was this displayed information coming from? What information is ‘encoded’ in the features?
Both are correct. We process local cases locally on a Virtual Machine. Because it runs locally, we can show you patient information for the selected patient in our browser. For the anonymized data in our knowledge base, we also have additional information (e.g. age, gender, reports, …) that we can likewise display.
The features encode abstract information extracted by our algorithms (e.g. neural networks), calculated locally from the image data.
What is the link between PACS and contextflow local server? Does the contextflow local server have full access to the images in the PACS?
Our local server has a DICOM receiver that accepts DICOMs forwarded to it. The PACS plugin creates a URL that points to the contextflow Virtual Machine and includes the DICOM Instance UID of the case, so that we can open the correct case previously sent to us from the PACS.
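A minimal sketch of how such a viewer link could be assembled (the hostname, path and `build_viewer_url` helper below are hypothetical placeholders, not contextflow's actual API; in a real deployment the URL points at the VM inside the hospital network):

```python
from urllib.parse import urlencode

def build_viewer_url(host: str, instance_uid: str) -> str:
    """Build a link to the local contextflow VM for a case that was
    previously forwarded to its DICOM receiver.

    The /viewer path and query parameter name are illustrative only.
    """
    query = urlencode({"instanceUID": instance_uid})
    return f"https://{host}/viewer?{query}"

# Example: link to a case identified by its DICOM Instance UID.
url = build_viewer_url("cf-vm.hospital.local", "1.2.840.113619.2.55.3.1234")
```

Because the host resolves only inside the hospital network and access is user/password protected, such a URL carries no risk outside that network.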
To what extent does contextflow have access to the local server? How is support given?
Through a secured VPN and SSH, we have access to the local server for maintenance and support. Users can submit requests, ideas and bug reports via our support helpdesk.
Is it possible to select a snippet of a case that also includes patient information in the corner? If yes, what is the impact on privacy of this information?
That has no impact, as the algorithm uses the image information only. The text information is just overlaid locally in the user interface so that all information is available at a glance.
How is communication between local and website protected? Is encryption applied?
The website can be configured to use HTTPS only. We need to cooperate to set up valid certificates, since the host runs in your local network. As a fallback, we can deploy self-signed certificates, which you would then need to whitelist on the workstations.
Are URLs one-time/hashed, or are they recognizable (example: contextflow.com/?company=LUMC&caseID=1234)?
The URL is recognizable: it includes the local hostname or IP address and the Instance UID. However, it only works locally, and access is user/password protected.
My conclusion is that the contextflow local server stores personal information (DICOMs). Could you confirm my conclusion?
Yes, in order to be able to search within a scan, it needs to be available on the local server, so we store the DICOM data that was sent to the local server.
How long are DICOMs stored on the local server?
DICOMs are kept as long as storage space is available; thereafter we begin deleting the oldest data. Alternatively, you can define a retention period.
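The oldest-first deletion policy described above can be sketched as follows (the directory layout, file extension and `enforce_retention` helper are hypothetical illustrations, not contextflow's actual implementation):

```python
import shutil
from pathlib import Path

def enforce_retention(storage_dir: str, min_free_bytes: int) -> None:
    """Delete the oldest DICOM files until the minimum free space
    requirement is met (or nothing is left to delete).

    Assumes DICOMs are stored flat as *.dcm files; oldest means
    oldest modification time.
    """
    files = sorted(Path(storage_dir).glob("*.dcm"),
                   key=lambda p: p.stat().st_mtime)
    for f in files:
        if shutil.disk_usage(storage_dir).free >= min_free_bytes:
            break  # enough space available, stop deleting
        f.unlink()
```

A configurable retention period would simply replace the free-space check with an age check against each file's timestamp.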
My assumption was that the local server also searches similar studies in our PACS. Based on your answer this does not seem the case. Could you confirm or deny my assumption?
Being able to search the local PACS system is a feature we want to provide in the near future, based on your input and requests. In the current version, this is not yet available.
When was contextflow founded?
We were founded in 2016.
Who founded contextflow?
contextflow was founded by four experts in AI for medical imaging: Markus Holzer (CEO), René Donner (CTO), Georg Langs (Chief Scientist) and Allan Hanbury (Professor of Data Intelligence). More information about our team can be found on our Team page.
What’s your origin story?
contextflow is a spin-off of the Medical University of Vienna (MUW), Technical University of Vienna (TU) and European research project KHRESMOI, which developed a multilingual, multimodal search and access system for biomedical information and documents. Upon successful completion of the project, our co-founders decided to continue the work, and thus contextflow was born.
How many people work at contextflow?
Fortunately, we’re growing! At last count, we are a team of 16.
What makes your team so special?
Our bond is our secret sauce, and we strengthen it regularly, be it cooking together, playing poker or our bi-weekly happy hours. We’re a diverse group from over 9 countries with a passion for changing the future of radiology and improving patient care. And we check our egos at the door, cultivating a collaborative environment.
I want to work for contextflow. What should I do?
We’re always looking for passionate individuals to join the contextflow family. Check out our Careers page for the most up-to-date openings. Don’t see what you’re looking for? Send your CV and a cover letter to firstname.lastname@example.org letting us know how you wish to contribute.
Feel free to contact us!