You may have noticed that we’re steadily adding new content to our Evaluation Center pages (check out our CRM page to see what I’m talking about). We’ll be giving them a new look soon, but for now we’re focusing on refreshing the text for each page.
Acronyms seem really convenient, at first.
It’s great using ERP instead of enterprise resource planning, for example.
You save precious time (not typing enterprise resource planning a million times) and there’s no way your reader won’t understand what ERP means, right?
Well, for ERP this may be true, but not all acronyms are as tidy as they appear.
What about BPM, for example? Does it stand for business process management or business performance management or business process modeling?
Aha…the troubles begin.
I just read Khoi Vinh’s quacking cow dolphin post (by way of Nicholas Carr’s blog) about how unfriendly he thinks enterprise software is (both posts are generating insightful commentary). Vinh makes a point about enterprise applications not receiving the same sort of widespread critiques that popular commodity applications do. He attributes this to the idea that the software is used by a less varied base of people, who aren’t very likely to be merciless with their feedback. He says:
“Shielded away from the bright scrutiny of the consumer marketplace and beholden only to a relatively small coterie of information technology managers who are concerned primarily with stability, security and the continual justification of their jobs and staffs, enterprise software answers to few actual users.”
I suppose that could often be the case, but it doesn’t have to be that way. I can think of at least two ways to keep that situation from coming to pass.
I received an e-mail notice today about Cofundos, a “community innovation & funding” site, which launched last week. Cofundos looks like one possible solution to an often murky area in the open source software space: how to continue fueling development.
Suppose you find some open source software useful, but it doesn’t have commercial backing devoting regular developers to its well-being (as, for example, Red Hat or Compiere do), and suppose you don’t want to employ developers internally to improve, fix, or modify it. Then no matter its utility, a lot of people and companies might be wary about relying on it in any larger-scale sense: who can they go to if they have a problem?
Like the old Tootsie Pop ads that ask how many licks it takes to get to the center, how many annoyances does it take to get people and businesses to change desktop operating systems? Aside from the frequent crunch of bloggers discussing their switch from Windows to Linux, we’re still waiting to find out.
In the news this week, Microsoft annoyed a number of admins with its Windows Desktop Search update. This added to many people’s perception that Microsoft pushes some of its updates without asking, an aggressive practice disliked for policy, security, troubleshooting, or other reasons.
How do you figure out, from among a large range of software vendors, which ones to start evaluating? I’m curious to see some feedback on what most people use to start researching and narrowing down their list of software vendors before going into an RFI process.
A few years ago we were thinking about this issue and came up with the idea of a preselection questionnaire that could narrow down the list of vendors you’d want to look at. It has evolved and works relatively well, but after a few years it’s good to reconsider how it works and see if we can improve it, based on what we’ve learned and on what people suggest.
No matter the methods of identifying vendors, you can usually find some common ground underlying them, which might be used as high-level preselection criteria. The following three examples show that even if you don’t use a formal process to identify vendors for evaluation, you still have to come up with a few high-level criteria.
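To make the idea concrete, here’s a minimal sketch (in Python, with invented vendor names, criteria, and data structures, not our actual questionnaire) of how a few high-level preselection criteria can narrow a long vendor list before an RFI even starts:

    # Hypothetical sketch: narrowing a vendor long list with a few
    # high-level preselection criteria before starting a formal RFI.

    # Each vendor record lists the high-level capabilities it claims to cover.
    vendors = {
        "Vendor A": {"industries": {"manufacturing"}, "regions": {"north_america"}, "hosting": {"on_premise"}},
        "Vendor B": {"industries": {"retail", "distribution"}, "regions": {"north_america", "europe"}, "hosting": {"on_premise", "hosted"}},
        "Vendor C": {"industries": {"services"}, "regions": {"europe"}, "hosting": {"hosted"}},
    }

    # Answers from a (hypothetical) preselection questionnaire.
    requirements = {
        "industries": {"distribution"},
        "regions": {"north_america"},
        "hosting": {"hosted"},
    }

    def meets_requirements(capabilities, requirements):
        """A vendor passes preselection only if it covers every required value."""
        return all(requirements[key] <= capabilities.get(key, set())
                   for key in requirements)

    shortlist = [name for name, caps in vendors.items()
                 if meets_requirements(caps, requirements)]
    print(shortlist)  # ['Vendor B']

The point of the sketch is only that a handful of coarse criteria can eliminate most of a long list quickly; the detailed functional comparison comes afterward, during the RFI.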
Take note if you’re evaluating software for any of the following types of systems.
We recently published updated ratings on a number of vendors’ products. Individual reports are available for purchase, or, better yet, you can review the ratings in depth using a free Evaluation Centers trial. Here’s a quick rundown of the updates.
Knowledge Management Solutions’ KMx product, an integrated e-learning package, is up to date as of version 4.3 in the Learning Management Evaluation Center.
Retalix targets companies with retail and distribution requirements. Depending on what your company does, you can view its products’ functionality based on our ERP - Distribution, SCM, Merchandising, or POS models of enterprise software.
Finally, the latest information on Sage SalesLogix is available in our CRM Evaluation Center. It covers a 30% change from the previous ratings and shows new or increased support for over fifty features.
Because we continuously update our knowledge bases with new ratings and research, I’ll make an effort to publish short notes like these periodically.
This Is For All Us Writers Out There: Oh, and All Us Readers Too!
Do you ever feel like you need a jargon buster just to understand what some companies are saying about their software products?
I know I’ve needed one, and often still do.
I am a content writer and editor for TEC, and the learning curve was pretty steep when I started. I mean, what are functionality, scalability, dynamic lead time, and run time? And then there are features and functions…enough to boggle the mind!
How many people really know what these words actually mean?
Not what they think these words might mean or what they sound like they mean in a certain context, but what they really and truly mean.
Well, it’s part of my job to know. And if I can’t explain it in plain English, I can’t use it.
And how many times have I read a white paper and realized that, if all the buzzwords were removed, it would be half the length (and comprehensible)?
I’ve collected some great examples along the way.
This post is not an oxymoron. The Open Source Initiative recently approved two Microsoft licenses (the Microsoft Reciprocal License and the Microsoft Public License) as compliant with the open source definition.
Why would Microsoft want to publish an open source license? The very idea of Microsoft participating in the open source community might sound odd. After all, hasn’t Microsoft been one of the most vocal proprietary vendors against free and open source software? Isn’t Microsoft known for its attempts to undermine open source standards? Often yes, but the company has also been dabbling, to various degrees, with open source for a while (its FlexWiki application is one example).
Editors’ Picks: Vendors submit. We review.
… white papers from whitepapers.technologyevaluation.com.
Editor A (the nominally genial one)
This one caught my eye as it crossed our desks a while ago.
“What number?” I asked. “A hundred? One? Pi? Do tell me more.”
Sometimes you don’t want to read the glowing pros or vicious cons about how vendors address 1433 separate business intelligence software criteria. I think it was for those times that my colleagues came up with the idea of writing a short type of report (the vendor showdown) that graphically demonstrates how different enterprise products stack up, based on a few key high-level criteria.
The recently published BI showdown garnered some strange scorn from readers commenting on the article. A few thought it was talking about the top three vendors, as opposed to three of the top vendors. Others, like the following commenter, thought it needed more detail.
“These ‘results’ offer no information that would give a decision maker any tools to help in the selection process. I guess you get what you pay for.”
In the showdown, however, my colleagues write, “Your company has distinct needs and priorities that need to be supported by any enterprise solution you adopt.” They’re referring to what you’d want from those 1433 criteria we use to analyze the BI software. They’re recommending that you go further than the showdown itself and examine the functionality in a way that makes sense for your requirements (in other words, use the BI Evaluation Center). I suppose we need to make this point clearer in future reports.
Still, I’m not sure why some people discounted analyst Lyndsay Wise’s insightful examination of why the vendors scored as they did. She brought up interesting points that you wouldn’t know just from looking at rating scores (or, say, a Magic Quadrant).
The showdown offers an overview-style analysis, one based on all 1433 criteria without asking you to consider each one. While it is useful for getting an idea of the functional areas on which the products tend to focus, its graphs are a product of our evaluation tools, not the tools themselves.
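For readers curious about the mechanics, here’s a minimal, hypothetical sketch of how detailed criterion ratings can be rolled up into a handful of high-level scores of the kind a showdown chart plots. The criteria, weights, and numbers below are invented for illustration and are not TEC’s actual model:

    # Hypothetical sketch: rolling detailed criterion ratings up into
    # a few high-level scores, the kind a showdown-style chart would plot.

    # Detailed ratings (0-100) keyed by (high-level area, criterion).
    ratings = {
        ("Reporting", "ad hoc queries"): 85,
        ("Reporting", "dashboards"): 70,
        ("Data Management", "ETL"): 90,
        ("Data Management", "metadata management"): 60,
    }

    # Optional weights per criterion; unlisted criteria default to 1.0.
    weights = {("Data Management", "ETL"): 2.0}

    def rollup(ratings, weights):
        """Weighted average of criterion ratings within each high-level area."""
        totals, weight_sums = {}, {}
        for (area, criterion), score in ratings.items():
            w = weights.get((area, criterion), 1.0)
            totals[area] = totals.get(area, 0.0) + w * score
            weight_sums[area] = weight_sums.get(area, 0.0) + w
        return {area: totals[area] / weight_sums[area] for area in totals}

    print(rollup(ratings, weights))
    # {'Reporting': 77.5, 'Data Management': 80.0}

The high-level numbers are convenient for a chart, but as the comment thread shows, they only become decision-support tools once you weight the underlying criteria for your own requirements.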
A few online tools make it easy to compare software criteria side by side. Of course, you probably expect that I think TEC provides the mother of all evaluation tools for comparisons (true). But this post is about some of the other guys. Two sites I like, which I recently came across, might be useful to you if you’re scanning the horizon for high-level comparison info. The first is Optaros’s EOS Directory and the second is ITerating. Both approach the issue from directions that are different from, and complementary to, TEC’s. Here’s a bit of what’s interesting about their approaches and why I think they can offer valuable supplementary information.
It’s a bit surprising that sales teams from some ERP vendors are still under the impression that simply wining and dining a customer is enough to win a sale. It’s this type of hubris that can cost vendors entire projects.
Recently, I was helping with a customer’s software evaluation and selection process. Yes, we have products and solutions that extend beyond the simple self-service tools we offer on our website. For this project, TEC was brought onboard to help conduct a comprehensive evaluation and selection process, following our methodology.
This means we looked at vendor RFI data in our software and augmented it with the customer’s unique requirements to get to a shortlist. With the shortlist, we looked at the vendors’ market information (for which we have a template), and then added other evaluation components, including vendor-scripted demos, performance and scale, ease of use, and reference checks. Conceptually, we have to take the easily quantified elements and supplement them with measurable qualitative factors. Some of this work was done on site, some of it remotely, but to avoid making this discussion too verbose, I’ll focus on our services related to evaluating the finalists.
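As a rough illustration of that last point, here’s a hypothetical sketch of blending quantified RFI scores with scored qualitative factors (scripted demos, ease of use, reference checks) into a single weighted figure per finalist. The factor names and weights are invented for the example and are not our actual methodology:

    # Hypothetical sketch: blending quantified RFI results with scored
    # qualitative factors into one weighted figure per finalist.

    finalists = {
        "Finalist X": {"rfi_fit": 82, "scripted_demo": 75, "ease_of_use": 70, "references": 90},
        "Finalist Y": {"rfi_fit": 78, "scripted_demo": 88, "ease_of_use": 85, "references": 80},
    }

    # Relative importance of each component (invented numbers).
    weights = {"rfi_fit": 0.4, "scripted_demo": 0.25, "ease_of_use": 0.15, "references": 0.2}

    def weighted_score(scores, weights):
        """Weighted average of component scores for one finalist."""
        total_weight = sum(weights.values())
        return sum(weights[k] * scores[k] for k in weights) / total_weight

    # Rank the finalists from strongest to weakest overall fit.
    for name, scores in sorted(finalists.items(),
                               key=lambda item: weighted_score(item[1], weights),
                               reverse=True):
        print(f"{name}: {weighted_score(scores, weights):.1f}")

The numbers only make the trade-offs visible; the judgment about how much weight each qualitative factor deserves still comes from the customer’s own priorities.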