By Konstantinos Michos
Abstract: Recent advances in AI technology have enabled an unprecedented level of control over the processing of digital images. This breakthrough has sparked discussions about many potential issues, such as fake news, propaganda, the intellectual property of images, the protection of personal data, and possible threats to human creativity. Susan Sontag (2005 [1977]) recognized the strong causal relationship involved in the creation of photographs, upon which scientific images rely to carry data (cf. Cromey 2012). First, this essay presents a brief overview of AI image-generation techniques and their status within the broader set of computational methodologies employed in scientific imaging. It then outlines their implementation in two specific examples: the black hole image (cf. Event Horizon Telescope Collaboration 2019a-f) and medical imagery (cf., e.g., Oren et al. 2020). Finally, conclusions are drawn regarding the epistemic validity of AI images. Considering the exponential growth of available experimental data, scientists are expected to resort to AI methods to process it quickly. An overreliance on AI lacking proper ethics will not only result in academic fraud (cf. Gu et al. 2022; Wang et al. 2022) but will also expose an uninitiated public to images where a lack of sufficient explanation can shape distorted opinions about science.
Introduction
The amount of data produced every day is growing at an extraordinary rate. The advent of the internet, social media, increased computing power in mobile formats, and an ever-increasing amount of available storage (local or cloud) are among the main reasons for this explosion. And while everyone realizes that the sheer size of the produced datasets is such that any meaningful processing cannot possibly be done by human endeavor alone, the task is often casually assigned to ‘artificial intelligence’ (AI), an all-inclusive term for most computational and algorithmic operations in everyday language. While there is an ongoing discussion about what exactly constitutes ‘intelligence’ in a technological context, we can hardly argue that we are even close to developing what would be deemed a generalized, broad form of AI. Tracing back to the emergence of the idea of AI in the 1950s, machine intelligence was first defined in terms of its ability to simulate human behavior (cf. McCarthy et al. 2006: 12-14). Without any intention of providing an exhaustive analysis, the term ‘AI’ will be used in this paper for any set of computational techniques that allow certain tasks to be completed with less reliance on pure arithmetic operations and more on decision-making on the AI’s part. Image generation falls into this category: the AI has to ‘decide’ which pixels should be included in the final image and which should not.
The problem of big data processing was encountered in scientific research long before algorithms were employed for delivering advertisements based on consumer preferences. Experimental procedures in the natural sciences produce a lot of data that need to be filtered for errors, characterized, grouped, and evaluated. For years, this lengthy procedure was done by researchers themselves seeking knowledge within data, i.e., meaning within information. As instruments kept advancing, more data was produced, allowing for finer measurements but also requiring more time and effort to work with. Computers helped, but human input and guidance were still crucial. The competitive advantage of AI (in the form of machine learning, deep learning, etc.) is the minimization of human intervention due to prior training. With the processing power currently available, almost no dataset is too big to handle. Thus, increasingly, AI offers the ability to work with datasets previously inaccessible because of their size.
One such area is nanotechnology. The study of nanospecimens (just billionths of a meter in size) naturally generates large amounts of data even for tiny fragments of materials. Nanoscopic devices such as the Scanning Tunneling Microscope (STM) or the Atomic Force Microscope (AFM) trace surfaces and reveal even single molecules protruding from them. The magnification scale is such that the area of interest in which scientists have to look for trends and peculiarities is the equivalent of a whole geographical region compared to a comprehensible map of it – a challenging endeavor to attempt. The zooming abilities of the instruments provide some control, but they are not always available. Davis Baird and Ashley Shew (2004) point out cases where the STM lacked such a feature, leading to visual artifacts being mistaken for actual data. Still, even with powerful magnification tools, an AI algorithm would make short work of these calculations, faster than any method relying on human input could.
But there is also another direction AI is taking in scientific research. At times, the study of phenomena is hampered not by an abundance of data but by an outright lack of it. Celestial objects at great distances emit light in such small quantities that telescopes can barely capture it. For a long time, astronomy relied on advances in optics and the production of larger lenses or parabolic mirrors. Nowadays, AI can be used to generate missing information through inference techniques, filling in the gaps in astronomical images, not unlike the editing done to security camera footage in crime investigations. Katherine Bouman et al. (2016), for instance, present a comparison between different visual enhancement algorithms, benchmarking their effectiveness in reproducing predetermined images.
While it is true that AI can be used to process any kind of scientific data, the focus of this essay is on images within empirical scientific research. Images here serve many different functions. Firstly, they engage viewers more efficiently. And while this becomes immediately clear for science communication, it is equally important in the research itself, where new knowledge needs to be accepted against what is already established. Klaus Sachs-Hombach (2016: 8) proclaims that “using pictures in a communicative context offers a powerful option because understanding pictures involves a particularly intense engagement of our perceptual system”, acknowledging that images can rival written or oral speech in communication. Maria Giulia Dondero and Jacques Fontanille (2014: 6) note that scientific images feature an experimental function as well as a cognitive one. The benefits of using images in education are also undeniable: Charles Xie and Hee-Sun Lee found that “college students gained deeper understanding of abstruse quantum ideas from the use of simulations” (Xie/Lee 2012: 1017). Science popularization obviously depends heavily on the use of images to quickly communicate the underlying principles and results of research. Secondly, numerical data turned into visual form is still data and can become the foundation for further research, a trivial example being the calculation of a rate of change from the slope of a graph. A third reason for the importance of images in scientific research lies in their perceived close connection to reality. Analog photography allowed light to leave an impression in the chemicals of the film through masterful engineering. The laws of physics and the restriction of human interaction to a single push of a button cemented a strong causal relationship with a real object. This is what Susan Sontag meant when she described the photograph as “incontrovertible truth” (Sontag 2005 [1977]). Of course, photographs are not immune to manipulation, nor is every experiment akin to taking a photo. Editing software has come a long way and allows an unprecedented level of control over digital assets. Modern scientific setups are so complex that they require a great deal of human input to produce meaningful data. The emergence of generative AI algorithms such as DALL·E, Midjourney, or Stable Diffusion provides a new, at times somewhat disturbing, perspective on computational processing. Regardless, the common idea and practice remains the pursuit of visual evidence. And perhaps there are no better examples of this than nanotechnology and astronomy, whose objects of observation are either too small or too large to be seen with the naked eye. Before offering some insight into these two fields of application, as well as into the promises and risks of employing AI image software there, I will present the two foundational kinds of reasoning in scientific discourse in order to evaluate their respective roles in handling pictorial data.
Two Kinds of Reasoning in Scientific Discourse
Scientific research develops mainly through two kinds of reasoning: deduction and induction.[1] Deduction is loosely described as the logical transition from general premises to particular conclusions, while induction follows the opposite direction. The advantages and pitfalls of each type of reasoning have been extensively discussed, with first attempts dating back to antiquity (cf. Aristotle 1998); the consensus is that conclusions reached through deduction are generally more reliable than those reasoned via induction. The implication for scientific scenarios is that methods producing secondary data based on the primary data of actual measurements are highly inductive and thus more likely to be invalid.
Let us examine in more detail how these two types of reasoning unfold in the examples offered above. A nanotechnology imaging experiment examines the nanosized details of a specimen that still needs to be macroscopic for researchers to handle. As a result, large amounts of data are generated in an attempt to capture the whole of the surface morphology. In this case, the deterministic mode of operation of the experimental instruments produces data unambiguously, in a deductive manner. An AI algorithm can then be used to locate deformations effectively, requiring minimal input and completing the task quickly. The AI calculates the positions of these deformations as output and also maps them in the form of an image. This image is derived data and is more closely related to the computational procedure (governed by the software code and its training, both anthropogenic) than to the experimental procedure (governed by physical laws). Such secondary data carries the misconceptions, presuppositions, and expectations of the developers of the code. It could be argued that research is subject to human error anyway, even without the use of AI. However, the training phase of AI makes its way of operating opaque to us. By excluding human intervention from the process, we lose access to the inner workings of the algorithm and only witness its final output. Since nanotechnology often finds application in medicine, researchers warn that caution is required:
[U]nless AI algorithms are trained to distinguish between benign abnormalities and clinically meaningful lesions, better imaging sensitivity might come at the cost of increased false positives, as well as perplexing scenarios whereby AI findings are not associated with outcomes. To facilitate the study of AI in medical image interpretation, it is paramount to assess the effects on clinically meaningful endpoints to improve applicability and allow effective deployment into clinical practice (Oren et al. 2020).
We will look at nanotechnology – and its overabundance of data – more closely below. On the other end of the spectrum, astronomical observations suffer from a lack of data. Light from the stars can carry valuable information about, for instance, their chemical composition. As stars are light years away from us, the light emitted is so scarce and of such low intensity that we often settle for whatever light is available. At times, astronomical images aim solely to capture the aesthetic beauty of the sky – in such cases their epistemic validity is quite irrelevant. However, in cases where these images are employed for a claim of proof, the way they were produced is critical. With little information at their disposal, astronomers rely on AI extrapolation techniques to fill in the missing parts needed to produce a full, final image – in other words, by induction. The resulting secondary pixels are not connected to any real referent but provide only a sense of ‘wholeness’ to the picture. In both cases, the AI methods employed can seriously harm the epistemological status of the produced images and raise doubts about the standing of all findings that accompany them. To illustrate the concerns expressed here, two specific examples will be analyzed in more detail: bacteria mappings with dimensions in the nanoscale and the image of the M87* black hole.
Astronomy: Too Little Data
First, let us clarify that the image of the M87* black hole, which was published in 2019 by the Event Horizon Telescope Collaboration and is the first of its kind, is controversial at best. And rightfully so: Black holes are thought to be astronomical objects that have undergone gravitational collapse; M87* belongs to the supermassive class. Firstly, not even light passing within a boundary around them (called the event horizon) can escape their gravitational field. Although they emit radiation, it lies well outside the boundaries of the visible spectrum, hence their name. With no visible light coming from them, black holes are by definition invisible. Secondly, black holes may not exist at all, at least not in the way we initially thought. Physicists such as Albert Einstein and Stephen Hawking predicted their existence through theoretical calculations – but since any direct observation is impossible, certain astronomical signals have been interpreted as black holes by the scientific community. In a talk given in 2013 at the Kavli Institute for Theoretical Physics, Santa Barbara, Hawking (2014) expressed his disbelief in the existence of the event horizon, renouncing his earlier claims that had contributed to his fame.
Ignoring these peculiarities for a moment, let us focus on the creation of the image itself. The full procedure has been documented in a series of articles (The Event Horizon Telescope Collaboration 2019a-f: L1-L6). They explain that the resolution of a telescope is related to the size of the lens or mirror used to capture light.[2] For an object as far away as M87* (almost 54 million light years from Earth), the single telescope lens required to ‘properly observe’ it would need to be almost the size of the Earth. As manufacturing a disc of that size is impossible, eight smaller telescopes around the globe were employed and used together. As the Earth rotated, more observations were collected, slowly contributing to a final image (cf. fig. 1).
Figure 1:
Left – Positions of the eight telescopes of the Event Horizon Telescope Collaboration.
Right – Tracks of the orbits of telescopes due to Earth’s rotation and their corresponding contribution to the black hole image
(The Event Horizon Telescope Collaboration 2019c/d, L3, L4)
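To make the scale invoked above concrete, a rough diffraction-limit estimate can be sketched (illustrative round figures only; the precise values are discussed in the EHT papers). The EHT observed at a wavelength of roughly 1.3 mm, and resolving a feature spanning only a few tens of microarcseconds calls for an angular resolution on the order of 10⁻¹⁰ radians, so the required aperture is

\theta \approx \frac{\lambda}{D} \quad\Rightarrow\quad D \approx \frac{\lambda}{\theta} \approx \frac{1.3 \times 10^{-3}\,\mathrm{m}}{10^{-10}\,\mathrm{rad}} \approx 1.3 \times 10^{7}\,\mathrm{m},

i.e., roughly 13,000 km – on the order of the Earth’s diameter. This is the arithmetic behind the claim that a single adequate lens would have to be almost as large as the planet.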
It is evident that the primary data gathered to form the image accounted for a very small area compared to the theoretically needed size of a single lens. Apart from proper ‘stitching’, the rest of the image had to be created through algorithms. The researchers dedicated a lot of effort to studying and eliminating possible sources of error, but they still acknowledge that “images are sensitive to choices made in the imaging and self-calibration process” (The Event Horizon Telescope Collaboration 2019d, L4: 9). Before deciding on a final image that best met their criteria, the researchers produced a long series of images in search of suitable parameters (cf. fig. 2).
Figure 2:
A series of generated black hole images based on different parameters
(The Event Horizon Telescope Collaboration 2019d, L4)
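To illustrate why the imaging stage involves choices at all, the following toy sketch in Python mimics the basic situation – it is emphatically not the EHT pipeline, and the grid size, the ring-shaped test source, and the five percent sampling fraction are all arbitrary assumptions made for illustration. An interferometer measures only a sparse subset of the Fourier components of the sky, so a naive reconstruction that sets the unmeasured components to zero is badly under-determined, and any ‘complete’ image has to rest on additional assumptions.

import numpy as np

rng = np.random.default_rng(0)

# A stand-in 'true sky': a small bright ring on a 64 x 64 pixel grid.
n = 64
y, x = np.mgrid[:n, :n]
r = np.hypot(x - n / 2, y - n / 2)
sky = np.exp(-((r - 12) ** 2) / 8.0)

# An interferometer samples the Fourier transform of the sky ('visibilities')
# only where telescope baselines happen to lie; here we mimic that by keeping
# a random 5% of all Fourier components.
vis = np.fft.fft2(sky)
mask = rng.random((n, n)) < 0.05

# Naive reconstruction: unmeasured components set to zero (a 'dirty image').
dirty = np.real(np.fft.ifft2(vis * mask))

print("measured fraction of Fourier components:", mask.mean())
print("relative error of the naive reconstruction:",
      np.linalg.norm(dirty - sky) / np.linalg.norm(sky))

Real pipelines close this gap with regularization and self-calibration – precisely the “choices made in the imaging and self-calibration process” acknowledged by the collaboration – which is why different parameter settings yield the different candidate images of figure 2.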
Parameter customization within computational approaches is to be expected as part of experiment calibration, but it leaves potential room for errors during this procedure. However, remarkably, in a focus issue of The Astrophysical Journal Letters (cf. Doeleman 2019) summarizing the extensive work (almost 250 pages in total), the EHT Collaboration boldly and confidently claimed “We report the first image of a black hole” and described this image as “the strongest case for the existence of supermassive black holes” (Doeleman 2019: n.pag.).
The technical details I have just outlined are meant as a backdrop against which to explain the concerns about the use of AI in scientific imaging. By the EHT Collaboration’s own admission, the image creation process was largely shaped by human choice, albeit a thoroughly justified one. This choice initially involved the modeling as well as the parameters used. And, indeed, this is what AI is capable of: solving complex mathematical problems based on the parameters we choose to program into it. This necessary human element makes the process completely different from the purely deterministic way images are produced by telescopes, dictated entirely by optical physics, where a referent (a star) is connected to a single final image in a one-to-one relationship. Contrary to that, AI can produce a series of images (as seen in figure 2) based on probability. Here, human scientists are again needed to choose, based on selected criteria. Proclaiming the validity of these images is a bold and risky step given that the inner workings of the AI tool itself are not free of room for error. One final comment: Katherine Bouman (a key member of the EHT Collaboration), when describing the development of the algorithm, mentions that machine training was naturally involved (cf. Bouman et al. 2016). The training included images of other astronomical objects as well as everyday images in order to create a ‘content-agnostic’ algorithm. In other words, the dataset used to simulate an invisible object consisted solely of visible objects. This further illustrates that AI images strongly reflect our choices and biases.
Nanotechnology: Too Much Data
There are many different instances in which nanotechnology can benefit from the use of AI (cf. Sacha/Varona 2013). Keeping our focus on visualized data, I will now discuss images of bacteria mappings over a surface. The size of these microorganisms places them in nanoscale territory. In the example discussed here, Nikiforov et al. (2009) attempt to identify two kinds of bacteria (M. lysodeikticus and P. fluorescens) based on their electromechanical response to PFM (Piezoresponse Force Microscopy). They emphasize that this method, unlike previous attempts to identify bacteria within images with AI, was not based on shape but on the response to the PFM excitation and therefore works at the single-pixel level. The produced images are shown in figure 3. The top left image (a) corresponds to the original PFM image; the rest are AI-generated mappings of the background (b) and the two types of bacteria (c and d). It can be seen that the large white spot near the bottom right corner of the original is not identified in any of the mappings. Perhaps optimization can further improve the performance of the AI, but if secondary images such as (c) or (d) were to be used as input for further calculations, the results would certainly be skewed.
Figure 3:
AI-assisted bacteria mappings from PFM input
(Nikiforov et al. 2009)
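As a minimal sketch of the general approach – not the authors’ code; the number of pixels, the idealized response curves, the noise level, and the network size below are all invented for illustration – per-pixel classification from spectral responses can be mimicked in Python as follows. Each pixel is represented by a response vector, a small neural network is trained on a labeled subset (the analogue of the authors’ training region), and the per-pixel predictions can then be reshaped into class maps analogous to those in figure 3.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n_pixels, n_freqs = 2000, 32
freqs = np.linspace(0.0, 1.0, n_freqs)

# Three hypothetical classes (background, species A, species B), each with an
# idealized electromechanical response curve plus measurement noise.
templates = np.stack([
    np.zeros(n_freqs),                     # background: flat response
    np.exp(-((freqs - 0.3) ** 2) / 0.01),  # species A: resonance near 0.3
    np.exp(-((freqs - 0.7) ** 2) / 0.01),  # species B: resonance near 0.7
])
labels = rng.integers(0, 3, n_pixels)
responses = templates[labels] + 0.1 * rng.normal(size=(n_pixels, n_freqs))

# Train on a labeled subset, then classify every pixel's response vector.
train = np.arange(n_pixels) < 500
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(responses[train], labels[train])
predicted = clf.predict(responses)

print("accuracy on pixels outside the training set:",
      (predicted[~train] == labels[~train]).mean())

In such a scheme, any pixel whose response does not match the patterns present in the training data – like the white spot noted above – ends up misassigned or unrecognized, which is what makes the resulting mappings risky as input for further calculations.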
Our goal is not to judge the performance of the algorithm; perhaps selecting a greater area than the blue rectangle in (a) for training would help, perhaps not. An interesting detail, in any case, is provided by the authors: The mechanical properties of the two species of bacteria have not been studied. Therefore, the stiffness used in the model (a required parameter) was that of a different bacterium, P. aeruginosa, the “closest” species according to the authors (Nikiforov et al. 2009: 4). In other words, the resulting mappings are once again the product of a choice, especially considering the use of the term ‘closest’: What qualities make two bacterial species ‘close’? Biological? Mechanical? Visual?
Regarding the visual traits of these images, a final comment needs to be made. As explained, the original input was acquired using PFM, an instrument that applies force to the specimens and visualizes the response. This means that the original data was not visual in nature but rather tactile. As with the black hole image, the underlying data does not need to conform to our expectations of vision. This should always be kept in mind when dealing with scientific visualizations. The stakes might seem low when studying a handful of bacteria in preconditioned experiments, but this would quickly change if these techniques were used for diagnostic purposes (a promise nanotechnology keeps reminding us of every now and then).
Discussion: Errors and Context
Of course, most scientists working on AI solutions for scientific images are aware of the aforementioned issues and constantly try to improve their methods and justify their choices. But even so, choices have to be made by humans. Personal biases affect results, and this subjectivity inevitably inflicts some damage on the epistemic value of these images. This does not suggest that such methods should be rejected entirely; otherwise, research would grind to a halt. Rather, making the underlying decisions explicit and retraceable should receive as much effort as the promotion of the conclusions.
This is especially true in two cases: when images are used as secondary data to further facilitate scientific research and when scientific images escape the academic realm and enter public media. The first is fairly self-explanatory: If generated data is allowed into scientific discourse, the validity of its findings has to be meticulously discussed and challenged. The second often escapes our attention. Vincent Bontems mentions that, after serving their cognitive purposes, scientific images begin a second life cycle in popular media, exerting psychosocial influence:
Outside of the scientific field, images ‘die’ as scientific images: they are no longer defined as carriers of scientific information. But they live a new life, redefined by their aesthetic power and their association with other types of images from different fields (art, advertisement, entertainment, science fiction, etc.). Scientists should be (and may sometimes be) aware of this fact (Bontems 2011: 179).
Stressing that scientific images are first and foremost data, Douglas Cromey goes on to list a set of practices protecting the integrity of visual data in images, concluding that although cases of fraud have been reported, it is usually a lack of skill that results in inappropriate images or mistaken interpretations. In order to mitigate this,
[t]he first thing that needs to change is our mindset. We still tend to think of digital images as a ‘picture,’ when in reality they are data. Pictures are artwork that can be changed to suit our desire for how they are presented to others, while image data are numerical and must be carefully manipulated in a way that does not alter their meaning (Cromey 2012: 17).
Cromey insists on the need for a ‘code of conduct’ in image data processing because, while the development of any tool aiding the difficult sequence of data processing is welcome, it can sometimes be used irresponsibly or – worse – maliciously. More and more researchers, such as Jinjin Gu et al. (2022) and Liansheng Wang et al. (2022), warn of cases of image fraud in scientific publications. And while we can understand (but not justify) mistakes occurring during experimental procedures under pressure, another narrative surrounding the use of AI is gaining momentum, one that is potentially even more dangerous: that AI fosters a “tech democratization” (O’Donnell 2023: n.pag.). Advocates of generative algorithms proclaim that such tools enable more people to engage with demanding tasks, such as painting, writing, coding, etc. This, by itself, is obviously commendable, especially considering that some people do not have access to higher levels of education. But let us not forget that if some people are currently able to create art themselves or code complex software, it is because they went through rigorous training, often at the expense of their personal life or financial standing. A dissemination of AI tools will not change the fact that some people will consciously choose to dedicate more time and resources to learning how to use these tools. So, by indirectly suggesting that the time and effort put in by artists, creators, or programmers is somehow an undemocratic practice, the striving for excellence is equated with social injustice. The skills of scientists have not only been acquired through a time-consuming process (a personal investment of which they should not be ashamed); they are also accompanied by the experience necessary to exercise them properly. In the case of scientific research at least, a ‘democratization’ could lead to an increased number of images generated in experiments and procedures of dubious epistemic value but of potentially great influence.
In both examples presented, the data under consideration was not visual at all in the first place. Applying visual properties to other types of data should be done with extreme caution, lest we stray from the subject matter itself. This is equally important when studying AI itself, in science or other disciplines. Cristina Voto discusses the visualization of AI latent space in art and concludes that “it seems necessary to understand the meaning-effects these technologies enact while giving form to latent ideologies” (Voto 2022: 60). However, the ‘latent space’ she refers to is only an intermediate step in AI data processing and does not represent final outputs, a fact she acknowledges (“a step that usually remains invisible to the human eye”, Voto 2022: 47). We tend to agree with Cromey: Digital images are indeed data and should be treated as such. However, visualizations of otherwise non-visible phenomena have to be looked at with a more discerning eye.
Conclusions
By analyzing cases in astronomy and nanotechnology, we have highlighted concerns about the use of AI in scientific imaging that revolve around two main issues. First, although AI undeniably offers computational assistance, it still requires human input, contrary to what is often advertised. Researchers mostly document these choices, but their significance may be downplayed in favor of presenting a groundbreaking conclusion. Secondly, scientific images are often mere visualizations of non-visual data and therefore cannot (and should not) bear the same epistemic weight as deterministic visuals such as photographs, at least not in the way Susan Sontag addressed it. Generative image platforms like DALL·E, Midjourney, or Stable Diffusion are mostly discussed in terms of their creative potential, their knack for finding satisfactory results via ‘happy accidents’, or their ability to mimic certain image styles. For these applications, the inner workings of said algorithms play a less important role – even though they are prone to reproducing social biases and inequalities, a problem in its own right. However, there are areas where exact knowledge and a high degree of transparency are absolutely needed in order to ensure epistemic certainty. AI algorithms like the ones mentioned above will definitely play a greater role in the future, especially as the amount of scientific data produced constantly increases. The implications will become more significant if generated imagery cannot be distinguished from ‘actual’ data; in that case, recognizing AI images and discussing their generative origin will become more important than their ability to accurately convey information. This does not mean that AI should be rejected altogether; it means that AI-generated images should be characterized by transparency regarding the way they came to be and should receive careful treatment when they exit the scientific sphere and enter the public one. We may feel at times that AI threatens the status quo of many human activities, art and creativity being among the first. The threat to science is less obvious but potentially more dangerous, especially if the widespread adoption of AI is seen as an important step in the ‘democratization’ of research. We should not forget that AI can only be harmful to the extent that we allow it to be.
Acknowledgments
This research is part of the author’s doctoral thesis which was co-financed by Greece and the European Union (European Social Fund-ESF) through the Operational Programme “Human Resources Development, Education and Lifelong Learning” in the context of the Act “Enhancing Human Resources Research Potential by undertaking a Doctoral Research” Sub-action 2: IKY Scholarship Programme for Ph.D. candidates in the Greek Universities.
References
Aristotle: The Complete Works of Aristotle: Revised Oxford Translation. Edited and translated by Jonathan Barnes. Princeton [Princeton University Press] 1998
Baird, Davis; Ashley Shew: Probing the History of Scanning Tunneling Microscopy. In: Davis Baird; Alfred Nordmann; Joachim Schummer (eds.): Discovering the Nanoscale. Amsterdam [IOS Press] 2004, pp.145-157
Bontems, Vincent K.: How to Accommodate to the Invisible? The ‘Halo’ of ‘Nano’. In: NanoEthics, 5(2), 2011, pp. 175-183
Bouman, Katherine L.; Michael D. Johnson; Daniel Zoran; Vincent L. Fish; Sheperd S. Doeleman; William T. Freeman: Computational Imaging for VLBI Image Reconstruction. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 913-922
Cromey, Douglas W.: Digital Images are Data: And should be Treated as such. In: Douglas J. Taatjes; Jürgen Roth (eds.): Cell Imaging Techniques. Vol. 931. Totowa [Humana Press] 2012, pp. 1-27
Doeleman, Shep: Focus on the First Event Horizon Telescope Results. In: The Astrophysical Journal Letters. April 2019. https://iopscience.iop.org/journal/2041-8205/page/Focus_on_EHT [accessed February 16, 2023]
Dondero, Maria G.; Jacques Fontanille: The Semiotic Challenge of Scientific Images: A Test Case for Visual Meaning. Translated by Julie Tabler. New York [Legas] 2014
Gu, Jinjin; Xinlei Wang; Chenang Li; Junhua Zhao; Weijin Fu; Gaoqi Liang; Jing Qiu: AI-Enabled Image Fraud in Scientific Publications. In: Patterns, 3(7), 2022, pp. 1-6
Hawking, Stephen W.: Information Preservation and Weather Forecasting for Black Holes. arXiv:1401.5761. January 22, 2014. https://arxiv.org/abs/1401.5761 [accessed February 16, 2023]
McCarthy, John; Marvin L. Minsky; Nathaniel Rochester; Claude E. Shannon: A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955. In: AI Magazine, 27(4), 2006, pp. 12-14
Nikiforov, Maxim P.; Vladimir V. Reukov; Gary L. Thompson; A.A. Vertegel; S. Guo; Sergei V. Kalinin; Stephen Jesse: Functional Recognition Imaging Using Artificial Neural Networks: Applications to Rapid Cellular Identification by Broadband Electromechanical Response. In: Nanotechnology, 20(40), 2009. doi:10.1088/0957-4484/20/40/405708
O’Donnell, Bob: The Surprise Winner for GenerativeAI. In: Techspot. January 18, 2023. https://www.techspot.com/news/97303-surprise-winner-generative-AI.html [accessed February 16, 2023]
Oren, Ohad; Bernard J. Gersh; Deepak L. Bhatt: Artificial Intelligence in Medical Imaging: Switching from Radiographic Pathological Data to Clinically Meaningful Endpoints. In: The Lancet Digital Health, 2(9), 2020, pp. e486-e488
Rodrigues, Cassiano T.: The Method of Scientific Discovery in Peirce’s Philosophy: Deduction, Induction, and Abduction. In: Logica Universalis, 5(1), 2011, pp. 127-164
Sacha, Gomez M.; Pablo Varona: Artificial Intelligence in Nanotechnology. In: Nanotechnology, 24(45), 2013. doi:10.1088/0957-4484/24/45/452002
Sachs-Hombach, Klaus: Acting with Pictures. In: Punctum: International Journal of Semiotics, 2(1), 2016, pp. 7-17
Sontag, Susan: On Photography. New York [Rosetta] 2005 [1977]
The Event Horizon Telescope Collaboration: First M87 Event Horizon Telescope Results. I. The Shadow of the Supermassive Black Hole. In: The Astrophysical Journal Letters 875: L1, 2019a, pp. 1-17
The Event Horizon Telescope Collaboration: First M87 Event Horizon Telescope Results. II. Array and Instrumentation. In: The Astrophysical Journal Letters 875: L2, 2019b, pp. 1-28
The Event Horizon Telescope Collaboration: First M87 Event Horizon Telescope Results. III. Data Processing and Calibration. In: The Astrophysical Journal Letters 875: L3, 2019c, pp. 1-32
The Event Horizon Telescope Collaboration: First M87 Event Horizon Telescope Results. IV. Imaging the Central Supermassive Black Hole. In: The Astrophysical Journal Letters 875: L4, 2019d, pp. 1-52
The Event Horizon Telescope Collaboration: First M87 Event Horizon Telescope Results. V. Physical Origin of the Asymmetric Ring. In: The Astrophysical Journal Letters 875: L5, 2019e, pp. 1-31
The Event Horizon Telescope Collaboration: First M87 Event Horizon Telescope Results. VI. The Shadow and Mass of the Central Black Hole. In: The Astrophysical Journal Letters 875: L6, 2019f, pp. 1-44
Voto, Cristina: From Archive to Dataset: Visualizing the Latency of Big Data. In: Punctum: International Journal of Semiotics, 8(1), 2022, pp. 47-62
Wang, Liansheng; Lianyu Zhou; Wenxian Yang; Rongshan Yu: Deepfakes: A New Threat to Image Fabrication in Scientific Publications? In: Patterns, 3(5), 2022, pp. 1-4
Xie, Charles; Hee-Sun Lee: A Visual Approach to Nanotechnology Education. In: International Journal of Engineering Education, 28(5), 2012, pp. 1006-1018
Footnotes
1 There is great debate over the types of logical reasoning and their contribution to scientific discoveries. Charles Sanders Peirce includes a third way of reasoning, abduction, and also concludes that no kind of reasoning offers absolute validity, just a probability of validity (cf. Rodrigues 2011). The subject is inexhaustible and certainly beyond our scope here.
2 The term light is used here in the broader sense of the word. In physics, light may refer to any kind of electromagnetic radiation, regardless of whether its frequency is within the human visible spectrum.
About this article
Copyright
This article is distributed under Creative Commons Attribution 4.0 International (CC BY 4.0). You are free to share and redistribute the material in any medium or format. The licensor cannot revoke these freedoms as long as you follow the license terms. You must, however, give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits. More information under https://creativecommons.org/licenses/by/4.0/deed.en.
Citation
Konstantinos Michos: AI in Scientific Imaging. Drawing on Astronomy and Nanotechnology to Illustrate Emerging Concerns About Generative Knowledge. In: IMAGE. Zeitschrift für interdisziplinäre Bildwissenschaft, Band 37, 19. Jg., (1)2023, S. 165-178
ISSN
1614-0885
DOI
10.1453/1614-0885-1-2023-15468
First published online
May/2023