Question: Error: no applicable method for `RunTSNE` applied to an object of class "try-error"
moldach wrote:

I'm running into issues trying to catch errors thrown from Seurat::RunTSNE() inside a function. When users input a small dataset, we've noticed that this function fails with the error:

Error in .checktsneparams(nrow(X), dims = dims, perplexity = perplexity, : perplexity is too large for the number of samples

To handle this condition I would like to run an alternative function and give the user a warning.

Typically one does so using try() or tryCatch(), for example:

exception_handling <- function() {
  tryCatch(
    expr = {
      message(RunTSNE(object = dataSet, dims.use = 1:10, do.fast = TRUE))
      message("Successfully executed the RunTSNE call.")
    },
    error = function(e) {
      message("Caught an error!")
      print(e)
    },
    warning = function(w) {
      message("Caught a warning!")
      print(w)
    },
    finally = {
      message("Reduce perplexity")
      warning_msg <- "Lowering perplexity to the lowest recommendation: 5."
      print(warning_msg)

      # Works with some functions
      lmfit <- lm(mpg ~ wt, mtcars)
      return(lmfit)
    }
  )
}

The output from exception_handling() is:

Call:
lm(formula = mpg ~ wt, data = mtcars)

Coefficients:
(Intercept)           wt  
     37.285       -5.344 

However, I run into an error when trying this with Seurat::RunTSNE():

exception_handling2 <- function(x) {
  tryCatch(
    expr = {
      message(RunTSNE(object = dataSet, dims.use = 1:10, do.fast = TRUE))
      message("Successfully executed the RunTSNE call.")
    },
    error = function(e) {
      message("Caught an error!")
      print(e)
    },
    warning = function(w) {
      message("Caught a warning!")
      print(w)
    },
    finally = {
      message("Reduce perplexity")
      warning_msg <- "Lowering perplexity to the lowest recommendation: 5."
      print(warning_msg)

      # Doesn't work for some function?
      dataSet_tSNE <- RunTSNE(object = dataSet, dims.use = 1:10, do.fast = TRUE, perplexity = 5)
      return(dataSet_tSNE)
    }
  )
}

Error in UseMethod(generic = "RunTSNE", object = object) : no applicable method for RunTSNE applied to an object of class "try-error"

Ultimately, I think I want to run a while-loop that reduces the perplexity value until the function runs successfully (but I'm not sure if this is the optimal way to do it)....

Maybe something like this:

dataSet <- try(RunTSNE(object = dataSet, dims.use = 1:10, do.fast = TRUE))
while (class(dataSet) == "try-error") {
  cat("Caught an error relating to 'perplexity', reducing the number from default = 50")
  for (i in 49:5) {
    dataSet <- RunTSNE(object = dataSet, dims.use = 1:10, do.fast = TRUE, perplexity = i)
  }
}
Answer: Error: no applicable method for `RunTSNE` applied to an object of class "try-error"
Martin Morgan wrote:

Seurat isn't a Bioconductor package and your example isn't reproducible. However, for your use of tryCatch()...

An 'error' stops a calculation, whereas a 'warning' should be handled and the calculation continued. One usually uses tryCatch() for errors, but withCallingHandlers() for warnings.
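
To make the distinction concrete, here is a minimal sketch (my addition, not from the original answer, and unrelated to Seurat): withCallingHandlers() handles the warning and lets the computation continue, whereas tryCatch() unwinds and abandons it.

f_warns <- function() {
    warning("something minor")
    "finished"
}

## tryCatch(): the warning handler's value replaces the result; "finished" is never returned
tryCatch(f_warns(), warning = function(w) {
    message("caught: ", conditionMessage(w))
    NA
})

## withCallingHandlers(): the warning is handled, muffled, and the computation continues
withCallingHandlers(f_warns(), warning = function(w) {
    message("handled: ", conditionMessage(w))
    invokeRestart("muffleWarning")
})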

Since the finally clause is always run, if one runs

i <- 0
tryCatch({
    i <- i + 1
    if (i == 1L)
        stop("oops")
}, finally = {
    i = i + 1
})

then the result is i = 2. If the initial value is instead i = 1, so the error doesn't occur, the result is i = 3. So you don't want to 'recover' in the finally= clause; a typical use might be to close a file connection that you opened in expr and that needs to be closed whether an error occurs or not.
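
A minimal sketch of that typical use (my addition, not from the original answer): open a connection, and close it in finally= so it is released whether or not expr fails.

con <- file(tempfile(), open = "w")
tryCatch({
    writeLines("some output", con)
    stop("simulated failure")
}, error = function(e) {
    message("caught: ", conditionMessage(e))
}, finally = {
    close(con)          # always runs, so the connection is never leaked
})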

If one wanted to try a value and then provide a 'recovery' value in case of error, one might write something like

f <- function(i) {
    tryCatch({
        if (i == 2)
            stop("oops")
        i
    }, error = function(e) {
        warning(e)                      # record error as warning
        NA                              # return NA
    })
}

f() returns its argument unless an error occurs, in which case it returns NA. It works as follows:

> sapply(1:5, f)

[1]  1 NA  3  4  5
Warning message:
In doTryCatch(return(expr), name, parentenv, handler) : oops

This seems to be close to what you would like to do -- try a value of RunTSNE(), and if it fails recover in some way or another.

Can you take these ideas and come up with a second iteration of your approach?

moldach replied:

Hi Martin,

I didn't realize Seurat wasn't under the Bioconductor umbrella and apologize for not including a reprex - I've done so now.

Thank you very much for providing a very helpful answer anyway!

library(dplyr)
library(Seurat)
library(tibble)
# Download ~11Mb file - subset from SRA653146
download.file(url = "https://www37.zippyshare.com/d/6WAzFBtp/16064/SRA653146_subset.csv", destfile = "SRA653146_subset.csv") 

# read in raw counts
raw_counts <- read.csv("SRA653146_subset.csv")

# remove genes that are not expressed in any cell
raw_counts <- raw_counts[, colSums(raw_counts != 0) > 0]

# convert to matrix
rownames(raw_counts) <- raw_counts[,1]
raw_counts <- raw_counts[,-1]

# convert to Seurat object
dataSet <- CreateSeuratObject(counts = raw_counts, min.cells = 3, min.features = 200)

dataSet[["percent.mt"]] <- PercentageFeatureSet(dataSet, pattern = "^MT-") # add percentage mitochondria

# subset Seurat object
dataSet <- Seurat:::subset.Seurat(dataSet, subset = nFeature_RNA > 200 & nFeature_RNA < 2500 & percent.mt < 5)

# normalize the data
dataSet <- Seurat::NormalizeData(dataSet, normalization.method = "LogNormalize", scale.factor = 10000)

# find highly variable features
dataSet <- FindVariableFeatures(dataSet, selection.method = "vst", nfeatures = 2000)

# scale and cluster
all.genes <- rownames(dataSet)
dataSet <- ScaleData(dataSet, features = all.genes)
dataSet <- RunPCA(dataSet, features = VariableFeatures(object = dataSet))
dataSet <- FindNeighbors(dataSet, reduction = "pca", dims = 1:20)
dataSet <- FindClusters(dataSet, resolution = 0.5, algorithm = 1)


f <- function() {
  tryCatch({
    if(dataSet <- RunTSNE(object = dataSet, dims.use = 1:10, do.fast = TRUE, perplexity = 50))
      stop("oops")
  }, error = function(e) {
    warning(e)                      # record error as warning
    dataSet <- RunTSNE(object = dataSet, dims.use = 1:10, do.fast = TRUE, perplexity = 5)
    dataSet
  })
}

dataSet_tSNE <- f()

This suggestion to provide a 'recovery' value (or function) in case of an error is almost 100% of the way there. If a perplexity value of 50 results in the error "perplexity is too large for the number of samples", a back-up call with a lower perplexity runs.

Ideally I would like a vectorized solution in the error = function(e){} handler that keeps trying lower values of perplexity until it works, something like your sapply(49:5, f) example - although I cannot figure out how to implement this.

So far I can only see how to nest multiple tryCatch() calls, but that is not practical when trying perplexity values all the way from 50 down to 5:

f2 <- function() {
  tryCatch({
    if(dataSet <- RunTSNE(object = dataSet, dims.use = 1:10, do.fast = TRUE, perplexity = 50))
      stop("oops")
  }, error = function(e) {
    warning(e)                      # record error as warning
    tryCatch({
      if(dataSet <- RunTSNE(object = dataSet, dims.use = 1:10, do.fast = TRUE, perplexity = 49))
        stop("oops2")
    }, 

    . . . # etc.

    error = function(e) {
      warning(e)
      dataSet <- RunTSNE(object = dataSet, dims.use = 1:10, do.fast = TRUE, perplexity = 5)
      dataSet
    })
  })
}
Martin Morgan replied:

If I had a function that only 'worked' for some small value of i, e.g.,

f <- function(i) {
    if (i < 10) {
        "ok"
    } else {
        stop("yuck")
    }
}

Naively I could write a loop that started at a large value and tried progressively smaller ones until it 'worked':

i <- 50
repeat {
    result <- tryCatch({
        f(i)
    }, error = function(e) {
        NA
    })
    if (!is.na(result))  # success!
        break            # exit loop
    i <- i - 1           # try new value
}

and after the loop i would be the first value to succeed

> i
[1] 9
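
(A hedged sketch of the same pattern applied to the RunTSNE() call, not part of the original reply; it assumes the dataSet Seurat object from the reprex above and simply steps the perplexity down until the call succeeds.)

perplexity <- 50
repeat {
    result <- tryCatch({
        RunTSNE(object = dataSet, dims.use = 1:10, do.fast = TRUE,
                perplexity = perplexity)
    }, error = function(e) {
        NA
    })
    if (!identical(result, NA))       # success: result is the updated Seurat object
        break
    if (perplexity <= 5)              # give up rather than loop forever
        stop("RunTSNE() failed even at perplexity = 5")
    perplexity <- perplexity - 1      # otherwise try a smaller perplexity
}
dataSet_tSNE <- result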

A better approach would be to write a function that calls the function you're interested in, but returns a numerical value indicating how close one is to the 'best' result (e.g., largest value that returns a computed result)

g <- function(i) {
    tryCatch({
        f(i)
        i
    }, error = function(e) {
        0
    })
}

You can see how g() evaluates for different values of i -- it's maximized at the largest value below 10.

> sapply(1:20, g)
 [1] 1 2 3 4 5 6 7 8 9 0 0 0 0 0 0 0 0 0 0 0

We can find this maximum efficiently with optimize() or other base R functions (e.g., nlm() or uniroot(), perhaps after re-defining g()):

> optimize(g, c(1, 20), maximum = TRUE)
$maximum
[1] 9.999995

$objective
[1] 9.999995
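
(As a rough illustration of the 're-defining g()' idea -- my sketch, not part of the original reply -- one could return a sign that flips at the failure boundary and let uniroot() locate it; h() below is a hypothetical helper built on the f() defined above.)

h <- function(i) {
    tryCatch({
        f(i)
        1                 # f() succeeded
    }, error = function(e) {
        -1                # f() failed
    })
}
uniroot(h, c(1, 20))$root   # converges near 10, the boundary where f() starts failing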

I think though you should reflect on the overall strategy of choosing perplexity in this way.
