
I'm trying to use the EBImage package to count objects in images. The following code works, but because I have a relatively large dataset (about 1,000 images), repeating this code 1,000 times is really exhausting. I need to make the code more efficient by minimizing the repetition, and I wonder if you could help me simplify the process. My specific question is: can I run these commands only once over all the images and get a list of numbers showing the "max" result for each image?

library(EBImage)

# Reading images
a <- readImage("https://scontent.cdninstagram.com/vp/8b4999eb71b5381b5b68d4bef635630e/5B735DE7/t51.2885-15/sh0.08/e35/p640x640/29415832_1622274641183922_9158751921719214080_n.jpg")
b <- readImage("https://scontent.cdninstagram.com/vp/2aa22b810fabd2478065503c9a921daf/5B64459B/t51.2885-15/e35/29739757_359646237856246_7485780478239178752_n.jpg")
c <- readImage("https://scontent.cdninstagram.com/vp/796f4ee2e02b012969d778f5fda4510e/5B569B51/t51.2885-15/s640x640/sh0.08/e35/29416798_1877911555555364_9109288458408427520_n.jpg")
d <- readImage("https://scontent.cdninstagram.com/vp/ae53b00be6dd5babfc1c02322d3be640/5B4EC8A5/t51.2885-15/s640x640/sh0.08/e35/29739069_266004773938331_5594342353562238976_n.jpg")

# For each image: convert to grayscale, threshold, label connected
# components; the highest label number is the object count
a = channel(a, "gray")
at = a > 0.55
labelsat = bwlabel(at)
max(labelsat)

# same steps for image b
b = channel(b, "gray")
bt = b > 0.55
labelsbt = bwlabel(bt)
max(labelsbt)

# same steps for image c
c = channel(c, "gray")
ct = c > 0.55
labelsct = bwlabel(ct)
max(labelsct)

# same steps for image d
d = channel(d, "gray")
dt = d > 0.55
labelsdt = bwlabel(dt)
max(labelsdt)
############################

Thanks in advance

Chamil Rathnayake

1 Answer

Read all the images once into a list, then apply the same steps to each element:
urls <- c(
  "https://scontent.cdninstagram.com/vp/8b4999eb71b5381b5b68d4bef635630e/5B735DE7/t51.2885-15/sh0.08/e35/p640x640/29415832_1622274641183922_9158751921719214080_n.jpg",
  "https://scontent.cdninstagram.com/vp/2aa22b810fabd2478065503c9a921daf/5B64459B/t51.2885-15/e35/29739757_359646237856246_7485780478239178752_n.jpg",
  "https://scontent.cdninstagram.com/vp/796f4ee2e02b012969d778f5fda4510e/5B569B51/t51.2885-15/s640x640/sh0.08/e35/29416798_1877911555555364_9109288458408427520_n.jpg",
  "https://scontent.cdninstagram.com/vp/ae53b00be6dd5babfc1c02322d3be640/5B4EC8A5/t51.2885-15/s640x640/sh0.08/e35/29739069_266004773938331_5594342353562238976_n.jpg"
)

# load the images once and save them
images <- lapply(urls, readImage)
# if there are a lot, consider save(images, file="images.rda") or something similar

# grayscale -> threshold -> label connected components; the highest
# label number is the object count
myfunc <- function(img) {
    a = channel(img, "gray")
    at = a > 0.55
    labelsat = bwlabel(at)
    return(max(labelsat))
}
# apply the function to every image; returns one count per image
sapply(images, myfunc)
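
If the full dataset of ~1000 images is on disk rather than at URLs, the same pattern scales directly; a minimal sketch, assuming the files sit in a local folder (the directory name and file pattern here are hypothetical):

# hypothetical local folder containing the ~1000 jpg images
files <- list.files("images", pattern = "\\.jpg$", full.names = TRUE)
images <- lapply(files, readImage)   # read each image once
counts <- sapply(images, myfunc)     # object count per image
names(counts) <- basename(files)     # label each count with its file
counts

The result is a named numeric vector with one count per image, which you could then write out with write.csv if needed.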
r2evans
  • Thanks very much. Really appreciate the quick reply. – Chamil Rathnayake Apr 05 '18 at 23:32
  • It's a frequent question, and this pattern is a commonly under-used efficiency in R. It also works well for reading lots of CSV files into frames (http://stackoverflow.com/a/24376207/3358272). It's almost cookie-cutter to apply to problems like this, and once a programmer starts seeing problems with this mindset, scaling becomes a lot easier. – r2evans Apr 05 '18 at 23:39
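
The linked CSV pattern has the same shape; a minimal sketch, assuming a directory of CSV files with matching columns (the path here is hypothetical):

files <- list.files("data", pattern = "\\.csv$", full.names = TRUE)
frames <- lapply(files, read.csv)     # read each file once into a list
combined <- do.call(rbind, frames)    # stack into a single data.frame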