All posts by Omar

DLRG schedule for spring semester 2024
(Google Meet link: https://meet.google.com/wah-nwrt-pqv)

1) Sunday (Mar 3, 2024) presented by Roua’ and Sara, “HGFormer: Hierarchical Grouping Transformer for Domain Generalized Semantic Segmentation”. arXiv: https://arxiv.org/abs/2305.13031

2) Sunday (Mar 17, 2024) presented by Israa, “SmallCap: Lightweight Image Captioning Prompted with Retrieval Augmentation”. arXiv: https://arxiv.org/abs/2209.15323

3) Sunday (Mar 31, 2024) presented by Fatima, “IC-GAN: An Improved Conditional Generative Adversarial Network for RGB-to-IR image translation with applications to forest fire monitoring”. doi: 10.1016/j.eswa.2023.121962

4) Sunday (Apr 14, 2024) presented by Ahmad and Rahmeh, “Artificial Intelligence and Machine Learning in Clinical Medicine, 2023”. doi: 10.1056/NEJMra2302038

5) Sunday (Apr 28, 2024) presented by Abdel Rahman, “Mamba: Linear-Time Sequence Modeling with Selective State Spaces”. arXiv: https://arxiv.org/abs/2312.00752

6) Sunday (May 12, 2024) presented by Ala, “ … ”. doi: …


DLRG schedule for autumn semester 2023
(Google Meet link: https://meet.google.com/wah-nwrt-pqv)

1) Sunday (Nov 5, 2023) presented by Abdel Rahman, “Prompting Large Language Models with Answer Heuristics for Knowledge-based Visual Question Answering”. arXiv: https://arxiv.org/pdf/2303.01903.pdf

2) Sunday (Nov 19, 2023) presented by Fatima, “ARU-GAN: U-shaped GAN based on Attention and Residual connection for super-resolution reconstruction”. doi: 10.1016/j.compbiomed.2023.107316

3) Sunday (Dec 3, 2023) presented by Israa, “Improved Arabic image captioning model using feature concatenation with pre-trained word embedding”. doi: 10.1007/s00521-023-08744-1

4) Sunday (Dec 17, 2023) presented by Roua’ and Sara, “Stochastic Segmentation with Conditional Categorical Diffusion Models”. arXiv: https://arxiv.org/pdf/2303.08888.pdf

5) Sunday (Dec 31, 2023) presented by Ala, “ResNet-Swish-Dense54: a deep learning approach for deepfakes detection”. doi: 10.1007/s00371-022-02732-7


DLRG schedule for summer semester 2023
(Google Meet link: https://meet.google.com/wah-nwrt-pqv)

1) Sunday (Jul 23, 2023) presented by Fatima, “LAC-GAN: Lesion attention conditional GAN for Ultra-widefield image synthesis”. doi: 10.1016/j.neunet.2022.11.005

2) Sunday (Jul 30, 2023) presented by Abdel Rahman, “NoisyTwins: Class-Consistent and Diverse Image Generation through StyleGANs”. arXiv: https://arxiv.org/pdf/2304.05866.pdf

3) Sunday (Aug 6, 2023) presented by Israa, “Image-Text Embedding Learning via Visual and Textual Semantic Reasoning”. doi: 10.1109/TPAMI.2022.3148470

4) Sunday (Aug 13, 2023) presented by Roua’ and Sara, “Segmentation ability map: Interpret deep features for medical image segmentation”. doi: 10.1016/j.media.2022.102726

5) Sunday (Aug 20, 2023) presented by Bara’a, “SECS: An Effective CNN Joint Construction Strategy for Breast Cancer Histopathological Image Classification”. doi: 10.1016/j.jksuci.2023.01.017


DLRG schedule for spring semester 2023
(Google Meet link: https://meet.google.com/wah-nwrt-pqv)

1) Sunday (Mar 12, 2023) presented by Israa, “Image Captioning for Effective Use of Language Models in Knowledge-Based Visual Question Answering”. doi: 10.1016/j.eswa.2022.118669

2) Sunday (Mar 26, 2023) presented by Ala, “Towards Personalized Federated Learning”. doi: 10.1109/TNNLS.2022.3160699

3) Sunday (Apr 9, 2023) presented by Fatima, “TCGAN: a transformer-enhanced GAN for PET synthetic CT”. doi: 10.1364/BOE.467683

4) Sunday (Apr 30, 2023) presented by Roua’ and Sara, “A novel deep learning model DDU-net using edge features to enhance brain tumor segmentation on MR images”. doi: 10.1016/j.artmed.2021.102180

5) Sunday (May 7, 2023) presented by Abdel Rahman, “ViXNet: Vision Transformer with Xception Network for deepfakes based video and image forgery detection”. doi: 10.1016/j.eswa.2022.118423

6) Sunday (May 21, 2023) presented by Bara’a, “Federated Fusion of Magnified Histopathological Images for Breast Tumor Classification in the Internet of Medical Things”. doi: 10.1109/JBHI.2023.3256974

Web Application Security (1904710)
Semester marks are distributed as follows: midterm exam 30% (first MCQ exam 16%, second hands-on exam 14%), homework and assignments 10%, and term paper 20% (research paper 10%, oral presentation 10%).

Std ID MT1 (16) MT2 (14) Midterm (30) H/W (10) Paper (10) Present. (10) Overall (60)
8190292 14 12 26 8.5 9 9 53
8191078 11 7 18 8.5 10 10 47
8191407 5 14 19 10 8 10 47
8191525 12 5 17 7 9 8 41
8191640 10 6 16 8.5 10 9 44
8200210 5 3 8 7 8 9 32
8200239 9 3 12 8.5 7 8 36
8200243 13 5 18 8.5 9 10 46
8200247 11 4 15 8.5 10 10 44
8200268 8 8 16 8.5 6 0 31
8200493 13 7 20 10 9.5 9 49
8201181 8 6 14 7 7 9 37
8201191 13 3 16 8.5 10 10 45
8201211 16 5 21 7 9 9 46
8201245 10 4 14 4 6 8 32
8201262 9 4 13 8.5 9 8 39
8210294 7 0 7 7 0 0 14
8210306 13 3 16 8.5 9 8 42
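
For reference, the Overall (60) column above is simply the sum of the component marks, with the midterm being the MCQ and hands-on parts combined. A minimal Python sketch of that computation (function and parameter names are illustrative only, not part of the course materials):

def overall_mark(mcq_16, hands_on_14, homework_10, paper_10, presentation_10):
    """Return (midterm out of 30, overall out of 60)."""
    midterm = mcq_16 + hands_on_14                                # out of 30
    overall = midterm + homework_10 + paper_10 + presentation_10  # out of 60
    return midterm, overall

# Example: first row of the table above (student 8190292).
print(overall_mark(14, 12, 8.5, 9, 9))  # -> (26, 52.5), recorded as 53 in the table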

Unpaired MR-CT Brain Image Dataset for Unsupervised Image Translation

The Magnetic Resonance – Computed Tomography (MR-CT) Jordan University Hospital (JUH) dataset was collected after receiving Institutional Review Board (IRB) approval from the hospital, and consent forms were obtained from all patients. All procedures were carried out in accordance with the Code of Ethics of the World Medical Association (Declaration of Helsinki).

The dataset consists of 2D image slices extracted using the RadiAnt DICOM viewer software. The extracted images were converted to DICOM format with a resolution of 256×256 pixels. There are a total of 179 2D axial image slices from 20 patient volumes (90 MR and 89 CT slices). The dataset contains MR and CT brain tumour images with corresponding segmentation masks. The MR images of each patient were acquired on a Siemens Verio 3T scanner with a slice thickness of 5.00 mm, using a T2-weighted sequence without contrast agent, 3 fat-saturation (FS) pulses, TR of 2500-4000, TE of 20-30, and a 90/180 flip angle. The CT images were acquired with a Siemens Somatom scanner with a dose-length product of 2.46 mGy·cm, 130 kV tube voltage, 113-327 mAs tube current, topogram acquisition protocol, 64 dual source, one projection, and a slice thickness of 7.0 mm. Smooth and sharp filters were applied to the CT images. The MR scans have a resolution of 0.7×0.6×5 mm³, while the CT scans have a resolution of 0.6×0.6×7 mm³. The MR-CT dataset can be downloaded from here.
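
Below is a minimal loading sketch using pydicom, a commonly used Python DICOM reader; the folder layout (dataset/MR, dataset/CT) and the .dcm extension are assumptions for illustration and should be adjusted to the actual download.

from pathlib import Path

import numpy as np
import pydicom  # pip install pydicom

def load_slices(folder):
    """Read every DICOM slice in `folder` into a list of 256x256 NumPy arrays."""
    slices = []
    for path in sorted(Path(folder).glob("*.dcm")):
        ds = pydicom.dcmread(str(path))
        slices.append(ds.pixel_array.astype(np.float32))
    return slices

mr_slices = load_slices("dataset/MR")  # expected: 90 axial MR slices
ct_slices = load_slices("dataset/CT")  # expected: 89 axial CT slices
print(len(mr_slices), len(ct_slices), mr_slices[0].shape)  # slices are 256 x 256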

Figure: Bidirectional MR-to-CT and CT-to-MR translation results of the unpaired models: (a) input, (b) ground truth, (c) cycleGAN, (d) dualGAN, (e) discoGAN, and (f) uagGAN, respectively.

*Please note that this data is the property of the Jordan University Hospital and is made available for download for research purposes only. Users are kindly requested to acknowledge the source of this data (https://doi.org/10.1016/j.dib.2022.108109 and the implementation in https://doi.org/10.1016/j.compbiomed.2021.104763) if used for any publication.

Radio-Frequency Ultrasound volume dataset for pre-clinical liver tumors

A total of 227 cross-sectional images (20 x 54 mm, with a resolution of 289 x 648 pixels) of hind-leg xenograft tumors from 29 mice were obtained with 1 mm step-wise movement of the array mounted on a manual positioning device. The whole tumor volume was acquired using a diagnostic ultrasound system with a 10 MHz linear transducer and 50 MHz sampling. The unprocessed ultrasound radio-frequency data can be downloaded along with the liver tumor segmentation masks from here*.

The dataset images are in the Analyze 7.5 file format, which consists of two files per volume: a header with information about the size and number of voxels in each dimension (filename extension .hdr), and the actual data in binary format (filename extension .img). Files in the dataset are named according to the following protocol: CASE NUMBER | FILE TYPE (LOC OR SEG) | FILE EXTENSION, where loc is the original image and seg is the associated segmentation mask. The volumetric data are categorised into progressive versus non-progressive cases in response to chemotherapy treatment.
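
As a rough sketch of how one case might be read, nibabel (a general-purpose medical imaging I/O library for Python) handles the Analyze 7.5 .hdr/.img pair directly; the exact file names below (e.g. case01_loc.img) are only a guess at the naming convention described above.

import nibabel as nib  # pip install nibabel

def load_case(prefix):
    """Return (image volume, segmentation mask) as NumPy arrays for one case."""
    vol = nib.load(prefix + "_loc.img").get_fdata()   # original image volume (loc)
    seg = nib.load(prefix + "_seg.img").get_fdata()   # tumour segmentation mask (seg)
    return vol, seg

vol, mask = load_case("case01")  # hypothetical case name
print(vol.shape, mask.shape)     # cross-sections are 289 x 648 pixels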

The example_case folder contains the output of the 3D Multi-fractal Nakagami Feature descriptor (MNF) algorithm applied to one of the liver tumour cases (see below). The slice images for that case were transformed into the corresponding Nakagami shape and scale parametric images; animated GIF images of the fractal slice maps and a video of the corresponding fractal volume maps are also included.
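
For context, a Nakagami parametric image is typically obtained with the standard moment-based estimator applied over a sliding window of the echo envelope (scale = E[R^2], shape = E[R^2]^2 / Var(R^2)). The sketch below illustrates only that step; it is not the paper's multi-fractal MNF pipeline, and the window size and function name are arbitrary choices.

import numpy as np
from scipy.signal import hilbert  # envelope detection of the RF lines

def nakagami_maps(rf, win=15):
    """Estimate Nakagami shape (m) and scale (omega) maps from one 2D RF frame."""
    envelope = np.abs(hilbert(rf, axis=0))  # analytic-signal envelope along each RF line
    intensity = envelope ** 2
    half = win // 2
    m_map = np.zeros_like(intensity)
    omega_map = np.zeros_like(intensity)
    for i in range(half, intensity.shape[0] - half):
        for j in range(half, intensity.shape[1] - half):
            patch = intensity[i - half:i + half + 1, j - half:j + half + 1]
            omega = patch.mean()                      # scale: E[R^2]
            m = omega ** 2 / max(patch.var(), 1e-12)  # shape: E[R^2]^2 / Var(R^2)
            omega_map[i, j], m_map[i, j] = omega, m
    return m_map, omega_map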

Figure: An RF ultrasound tumour image and the corresponding multi-resolution fractal slice maps, illustrating how the case responds to chemotherapy treatment (blue regions).

*Please note that this data is the property of the Institute of Biomedical Engineering – University of Oxford and is made available for download for research purposes only. Users are kindly requested to acknowledge the source of this data and the paper below if used for any publication. All studies were ethically approved and performed in line with UK Home Office regulations, and in accordance with personal and project licenses.

O. S. Al-Kadi, Daniel Y.F. Chung, Robert C. Carlisle, Constantin C. Coussios, J. Alison Noble, “Quantification of ultrasonic texture intra-heterogeneity via volumetric stochastic modeling for tissue characterization,” Medical Image Analysis, 2014.
(DOI: http://dx.doi.org/10.1016/j.media.2014.12.004).

E-mail: o.alkadi[at]ju.edu.jo
webpage last updated 31 July 2014