When you run kegga() in edgeR, the de argument is supposed to be a fitted model object (DGELRT or DGEExact). Please type help("kegga.DGELRT") for documentation.
You seem to have instead input a data.frame. That will generally give an error because kegga() isn't intended to accept a data.frame. Even if it doesn't give an error, the results will be nonsense because kegga() will try to interpret each column of the data.frame as a vector of gene IDs.
goana() and kegga() are exactly the same in this respect. If kegga() gives an error, then goana() will also give an error if run on the same arguments.
I don't know what your data.frame represents. It looks like you might have run glmTreat() and then extracted the data.frame from the table component of the topTags() output. Wouldn't it be simpler just to run kegga() and goana() in the documented way? You can use:
lrt <- glmTreat( ... )
k <- kegga(lrt)
g <- goana(lrt)
Apart from being correct, wouldn't that be easier? It cuts out several of the steps you seem to have taken.
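For reference, here is a minimal sketch of that workflow end to end. It assumes you already have a DGEList called y with Entrez Gene IDs as row names and a design matrix called design; those object names, the coefficient number, the lfc threshold and species = "Hs" are placeholders for illustration, not something taken from your analysis:

library(edgeR)
# y: DGEList with Entrez Gene row names; design: design matrix (both assumed)
fit <- glmQLFit(y, design)
tr <- glmTreat(fit, coef = 2, lfc = 1)   # test relative to a fold-change threshold
k <- kegga(tr, species = "Hs")           # KEGG analysis on the fitted object ("Hs" assumes human IDs)
g <- goana(tr, species = "Hs")           # GO analysis on the same fitted object
topKEGG(k)                               # view the top-ranked pathways
topGO(g)                                 # view the top-ranked GO terms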
For goana() I gave a data.frame that looks like the one above, and it worked. I don't know why it is not working for kegga().
Sorry, but what you say is not right. You cannot give a data.frame to either goana() or kegga(). It's just wrong, and it will generally give an error. Even if the functions don't generate an error, the results will still be nonsense.
Why not just follow the documented way to run these functions?
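If you really do only have a table of results, the documented alternative is the default method of goana() and kegga(), which takes a character vector of Entrez Gene IDs rather than a data.frame. A rough sketch follows, assuming your table is called tab, has Entrez IDs in a column called GeneID and an FDR column (all hypothetical names you would need to adjust); passing the fitted object as above is still the simpler route:

# hypothetical column names; adjust to match your actual table
de.genes <- as.character(tab$GeneID[tab$FDR < 0.05])   # significant genes as a character vector
univ     <- as.character(tab$GeneID)                   # all tested genes as the universe
k <- kegga(de.genes, universe = univ, species = "Hs")
g <- goana(de.genes, universe = univ, species = "Hs")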