Fluoroscopy is a technique for obtaining "live" X-ray images of a patient; it is like an X-ray TV camera. The radiologist uses a switch to control an X-ray beam that is transmitted through the patient. The X-rays then strike a fluorescent plate that is coupled to an "image intensifier", which is in turn coupled to a television camera. The radiologist can then watch the images "live" on a TV monitor.
Fluoroscopy is also used during many diagnostic and therapeutic radiologic procedures to observe the action of instruments being used either to diagnose or to treat the patient.
Below is a video clip of an actual fluoroscopic procedure, showing how we swallow and how the swallowed material settles in the stomach.
Sunday, April 12, 2009
History of Fluoroscopy
The beginning of fluoroscopy can be traced back to November 8th, 1895, when Wilhelm Röntgen noticed a barium platinocyanide screen fluorescing as a result of being exposed to what he would later call X-rays. Within months of this discovery, the first fluoroscopes were created. Early fluoroscopes were simply cardboard funnels, open at the narrow end for the eyes of the observer, while the wide end was closed with a thin piece of cardboard coated on the inside with a layer of fluorescent metal salt. The fluoroscopic image obtained in this way was rather faint.
Due to the limited light produced by the fluorescent screens, early radiologists were required to sit in the darkened room where the procedure was to be performed, accustoming their eyes to the dark and thereby increasing their sensitivity to the light. The placement of the radiologist behind the screen also resulted in significant radiation doses to the radiologist. Red adaptation goggles were developed by Wilhelm Trendelenburg in 1916 to address the problem of dark adaptation of the eyes, previously studied by Antoine Béclère. The red light passed by the goggles' filters kept the physician's eyes dark-adapted before the procedure while still letting in enough light for him to function normally.
The development of the X-ray image intensifier and the television camera in the 1950s revolutionized fluoroscopy. The red adaptation goggles became obsolete once image intensifiers amplified the light produced by the fluorescent screen enough for the image to be seen even in a lighted room. The addition of the camera enabled viewing of the image on a monitor, allowing a radiologist to view the images in a separate room, away from the risk of radiation exposure.
Further improvements in screen phosphors, image intensifiers and, more recently, flat panel detectors have increased image quality while minimizing the radiation dose to the patient. Modern fluoroscopes use caesium iodide (CsI) screens and produce noise-limited images, so that the lowest practical radiation dose still yields images of acceptable quality.
The invention of X-ray image intensifiers in the 1950s allowed the image on the screen to be visible under normal lighting conditions, as well as providing the option of recording the images with a conventional camera. Subsequent improvements included the coupling of, at first, video cameras and, later, CCD cameras to permit recording of moving images and electronic storage of still images.
Modern image intensifiers no longer use a separate fluorescent screen. Instead, a caesium iodide phosphor is deposited directly on the photocathode of the intensifier tube. On a typical general-purpose system, the output image is approximately 10⁵ times brighter than the input image. This brightness gain comprises a flux gain (amplification of photon number) and a minification gain (concentration of photons from a large input screen onto a small output screen), each of approximately 100. This level of gain is sufficient that quantum noise, due to the limited number of X-ray photons, is a significant factor limiting image quality.
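To put rough numbers on that gain, here is a minimal Python sketch. The screen diameters and the photon count are illustrative values I have assumed, not specifications of any particular intensifier.

```python
import math

# Minification gain: the same photons are squeezed from a wide input
# screen onto a small output screen, so brightness rises with the
# ratio of the screen areas.
def minification_gain(input_diameter_cm, output_diameter_cm):
    return (input_diameter_cm / output_diameter_cm) ** 2

flux_gain = 100                            # photon amplification, ~100
min_gain = minification_gain(25.0, 2.5)    # (25 / 2.5)^2 = 100
print(flux_gain * min_gain)                # 10000 -> overall gain of order 10^4-10^5

# Quantum (Poisson) noise: with N x-ray photons contributing to a pixel,
# the signal-to-noise ratio goes as sqrt(N). Amplification brightens the
# image but cannot add information the photons never carried.
photons_per_pixel = 400                    # illustrative count
print(math.sqrt(photons_per_pixel))        # SNR ~ 20
```

This is why the text calls the images "noise-limited": once the gain is high enough, image quality is set by how many X-ray photons were detected, not by how brightly they are displayed.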
Below I am posting Professor McLeod's presentation on fluoroscopy; I feel it brings all of the above together.
Monday, March 16, 2009
What is Linear Tomography?
Tomography is imaging by section or sectioning. The word derives from the Greek "tomos", which means "a section", "a slice", or "a cutting", and "graphein", which means to write or document.
Linear Tomography is a radiographic technique that uses motion to demonstrate anatomy lying in a plane of tissue, while blurring or eliminating structures above and below the plane of interest. This technology led to the development of CT (Computerized Tomography), which is more widely used today.
Purpose for Linear Tomography
In tomography, radiologic staff make a sectional image through the body by moving the x-ray source and the film in opposite directions during the exposure. As a result, structures in the focal plane appear sharper, while structures in other planes appear blurred. By adjusting the direction and range of the movement, operators can select different focal planes containing the structures of interest. This technique was developed in the 1930s by the radiologist Alessandro Vallebona, and it proved useful in reducing the problem of superimposition of structures in projectional (shadow) radiography.
In figure A, on the left above, you see a PA chest x-ray, with the arrow pointing to an inflammatory lesion.
In figure B, on the right above, you see a tomogram of the left lung apex, where the inflammatory lesion is seen unequivocally, along with its size and any cavitations.
Principles of Linear Tomography
The Tomographic Principle is based on the synchronous movement of two of the three elements in a tomographic system: the tube, the object, and the image receptor.
The tube and the image receptor move during the exposure in opposite directions around a stationary fulcrum, the pivot point.
The tube and the image receptor are attached by a rod as described in the equipment section of this blog.
The object to be imaged is placed at the level of the fulcrum, or pivot point. Anatomy at the pivot point will not be blurred, because its projection does not move relative to the image receptor during the sweep. Thus tomography enhances visualization of structures that would otherwise be superimposed.
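To see why the fulcrum plane stays sharp, here is a minimal geometric sketch in Python. The tube height, film depth, and sweep length are assumed, purely illustrative numbers rather than values from any real unit; the function traces a ray from the moving tube through a point above the fulcrum onto the oppositely moving film and measures how far the point's image smears during the exposure.

```python
def blur_width(h_cm, travel_cm=40.0, tube_height_cm=80.0, film_depth_cm=20.0):
    """Smear (in cm) on the moving film of a point sitting h_cm above
    the fulcrum plane during a linear tomographic sweep.

    The tube sits tube_height_cm above the fulcrum and sweeps through
    travel_cm; the film sits film_depth_cm below the fulcrum and moves
    the opposite way, scaled so the fulcrum plane projects to a fixed
    spot on the film. All defaults are illustrative, not from a real unit.
    """
    H, D = tube_height_cm, film_depth_cm

    def image_position(x_tube):
        # Ray from the tube at (x_tube, H) through the point (0, h_cm),
        # extended down to the film plane at height -D.
        x_on_film_plane = x_tube * (1.0 - (H + D) / (H - h_cm))
        x_film_carriage = -x_tube * D / H   # film moves opposite the tube
        return x_on_film_plane - x_film_carriage  # spot on the moving film

    return abs(image_position(travel_cm / 2) - image_position(-travel_cm / 2))

print(blur_width(0.0))  # 0.0  -> a point in the fulcrum plane stays sharp
print(blur_width(2.0))  # ~1.3 -> 2 cm off the plane already smears visibly
print(blur_width(5.0))  # ~3.3 -> blur grows with distance from the fulcrum
```

Note that the smear grows in proportion to both the distance from the fulcrum and the length of the tube travel, which is why a longer sweep (a larger tomographic angle) leaves a thinner slice in sharp focus.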
Equipment used in Linear Tomography
Applications for Linear Tomography
Tomography is commonly used when improved radiographic contrast is essential to the diagnostic exam. Through blurring of overlying and underlying tissues, the subject contrast of tissue in the tomographic section is enhanced.
Anatomy at the target level remains sharp, while structures at different levels are blurred. By varying the extent and path of motion, a variety of effects can be obtained.
Although largely obsolete, conventional tomography is still used in specific situations such as dental imaging (orthopantomography) or in intravenous urography.