Quick Take: Antsle coming out with Xeon-D models with 10GbE in December

Welcome back to rsts11. Earlier this year you saw us post a first look at the Antsle “personal cloud” development systems, which provide a fanless, silent, cloud-style provisioning environment for the desktop, built on the KVM hypervisor and Linux Containers (LXC).

Later, we built a system that approximated our view of the obvious evolution of Antsle’s model, albeit not fanless (and thus not completely silent), and not as compact. We used the Supermicro X10SDV-4C-TLN2F-O board, a 4-core, 8-thread Xeon D board featuring dual 10GbE copper ports and support for 64GB of unbuffered or 128GB of registered memory.

Well, Antsle announced today that they will be releasing Xeon D-based models in mid-December.

[Image: antsle announcement tweet]

Their low-end machine, with specs similar to the 4-core board we used, starts at $1,349. Models with 8-core and 12-core boards are also available.

[Image: antsle XD model lineup]

The prices jump more than the difference in board cost because the base RAM/SSD configurations also grow, as do the uplift options.

  • antsle one XD: $1,349 for 4-core, 16GB (upgradable to 32GB), 2x 256GB Samsung 850 Pro SSD
  • antsle one XD Pro: $2,499 for 8-core, 32GB (upgradable to 64GB), 2x 512GB Samsung 850 Pro SSD
  • antsle one XD Ultra: $4,499 for 12-core, 64GB (upgradable to 128GB), 2x 1TB Samsung 850 Pro SSD

The Avoton-based systems are still listed, starting at $759. If you register for Antsle’s mailing list, you will probably get occasional promotions and discount offers; you can also watch their social media profiles (Twitter, Facebook) for some of these deals.
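To put those tier-to-tier price jumps in perspective, here’s a quick back-of-the-envelope comparison of cost per core and per gigabyte of base RAM, using only the list prices and base configurations above (a rough sketch; it ignores the SSD and upgrade differences):

```python
# Rough per-core and per-GB comparison of the announced antsle XD tiers,
# using the base prices and configurations from the list above.
models = [
    # (name, price in USD, cores, base RAM in GB)
    ("antsle one XD", 1349, 4, 16),
    ("antsle one XD Pro", 2499, 8, 32),
    ("antsle one XD Ultra", 4499, 12, 64),
]

for name, price, cores, ram in models:
    print(f"{name}: ${price / cores:.0f}/core, ${price / ram:.0f}/GB base RAM")
    # e.g. "antsle one XD: $337/core, $84/GB base RAM"
```

The per-gigabyte cost actually drops as you go up the line ($84, $78, $70), so most of the price jump is the larger base RAM and SSD allotments rather than the board itself.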

We still haven’t ordered one of the Antsle boxes due to shifting project budgeting, but the idea still has promise. And they don’t seem to do eval boxes (although if they change their minds, we’d love to try one out).

As we noted in our original take on the antsle model, you can probably build something similar on your own, and if you find it worthwhile and/or practical to spend time building the hardware and software platform, you’ll probably have lower capital expense building it yourself. If you just want to plug a silent box in, plop it onto your desk, and go to work, the nominal added cost for the pre-built appliance is probably worth spending.

Have you tried the antsle platform, or built your own similar system? Let us know in the comments.

 

Disclosure: While I had an email exchange with antsle’s CMO before writing the original antsle post in March 2017, I receive no consideration from antsle for discussing their product. And while the hardware is relatively resilient (mirrored SSDs, ECC RAM), I wouldn’t recommend it for an enterprise production deployment. But then, it’s explicitly not aimed at that market.

 


Quick take: A conversation with Opengear at Cisco Live US in Las Vegas

Some more content will be coming as work changes settle in, but I wanted to share this video with my readers.

I’ve been a fan of Opengear for many years, and they sponsored the Tech Field Day Xtra at Cisco Live US 2013 in Orlando that made it possible for me to attend my first CLUS event.

Having met with them at the last few Interop events, and having covered their new infrastructure manager box with built-in Ethernet switching, I was pleased to be invited to talk with them for a couple of video features on the Opengear story, the evolution of console server/terminal server technology, and some more general technology perspectives.

The first video was posted today… I’ll update this post with others as they come out. And hopefully I’ll be seeing many of you at Cisco Live US 2018, going back to Orlando.

Looking back on InteropITX 2017 – the good, the bad, and the future

My fifth Interop conference is in the books now. Let’s take a look back and see how it turned out, and where I think it will go next year. See disclosures at the end if you’re into that sort of thing.

Ch-ch-ch-changes…

The event scaled down this year, moving down the strip to the MGM Grand Conference Center after several years at Mandalay Bay. With the introduction of a 30-member advisory board from industry and community to support the content tracks, Interop moved toward a stronger content focus than I’d perceived in past events.

The metrics provided by Meghan Reilly (Interop general manager) and Susan Fogarty (head of content) showed some interesting dynamics in this year’s attendance.

The most represented companies had 6-7 attendees each, as I recall from the opening callouts, with an average of about two people per company. More than half of the attendees were experiencing Interop for the first time, and nearly two-thirds were management rather than practitioners.

The focus on IT leadership, from the keynotes to the leadership and professional development track for sessions, was definitely front and center.

How about that content?

Keynotes brought some big names and interesting stories to InteropITX. There wasn’t always a direct correlation between the two, but there was interesting context to be had, from Cisco’s Susie Wee talking about code and programmability in an application world (and getting the audience to make live API calls from their phones), to Kevin Mandia of FireEye talking about real-world security postures and threat landscapes. Andrew McAfee brought the acronym of the year to the stage, noting that decisions in companies are often made not by the right person but by the HiPPO: the Highest Paid Person’s Opinion.

With five active tracks, there was content for everyone in the breakouts this year as well. Some tracks will need larger rooms next year (like the Packet Pushers Future Of Networking, which seemed to demand software-defined seating when I tried to get in) and others may need some heavier recruiting.

Attendees can access the presentations they missed (check your Interop emails), and some presentations may have been posted separately by the presenters (i.e. to Slideshare or their own web properties) for general access. Alas, or perhaps luckily, the sessions were not recorded, so if you haven’t heard Stephen Foskett’s storage joke, you’ll have to find him in person to experience it.

Panic at the Expo?

But the traditional draw of Interop, its expo floor (now called the Business Hall), was still noteworthy. With over a hundred exhibitors, from large IT organizations like VMware to startups and niche suppliers, you could see almost anything there (except wireless technology, as @wirelessnerd will tell you about here). American Express OPEN was there again as well, and while they couldn’t help fix Amex’s limited response to the Chase Sapphire Reserve (read more about that on rsts11travel if you like), they were on hand to help business owners get charge card applications and swag processed.

The mega-theatre booths of past years were gone, and this year’s largest booths were 30×30 for VMware and Cylance among others.

Some of the big infrastructure names were scaled way back (like Cisco, with a 10×10 booth alongside a Viptela 10×10 and a Meraki presence at the NBASE-T Alliance booth) or absent (like Dell, whose only presence was in an OEM appliance reference, and HPE, who seemed to be completely absent).

These two noteworthy changes to the expo scene were probably good for the ecosystem as a whole, with a caveat. With a more leveled playing field in terms of scale and scope, a wider range of exhibitors were able to get noticed, and it seemed that the booth theatre model and the predatory scanner tactics were mostly sidelined in favor of paying attention to people who were genuinely interested.

The caveat, and a definite downside to the loss of the big names, was that Interop was one of the last shows that gave you a chance to see what the “Monsters of IT Infrastructure” were doing, side by side, in a relatively neutral environment. For this year at least, VMworld is probably as close as you will get to the big picture.

Some of this may have to do with the conference ecosystem itself; Dell EMC World was the previous week in Las Vegas, with HPE Discover the first full week of June and Cisco Live US the last full week of June. These events often occupy speakers and exhibition staffs for weeks if not months beforehand, and the big players also had events like Strata Hadoop World in London to cope with as well. (See Stephen Foskett’s Enterprise IT Calendar for a sense of the schedule.)

Will the “Monsters of IT” come back next year?

I’d like to see them return, as fresh interest and opportunity is a good way to sustain growth, but I have a feeling the shift toward their owned-and-operated events, and away from the few (one?) remaining general IT infrastructure events, is likely to continue. They may just field speakers for the content tracks and assume that people will come to them anyway.

Meanwhile, smaller players will continue to grow. While they may appear to be just nipping at the heels of the big players, they’re building a base and a reputation in the community, and they don’t need to beat the Cisco/Dell/HPE-scale vendors to succeed. So maybe everyone wins.

But what about InteropNet?

The earliest memory I have of Interop, from my 2013 visit, was finding a pair of Nortel Passport (later Avaya ERS) 8600 routing switches in the InteropNet network. InteropNet was a demonstration platform that brought together a wide range of vendors, spanning routing and switching, wireless, and software layers (monitoring and management in particular), and it was noticeably absent this year.

Part of this may be due to the smaller size of the Business Hall, but part is also due to the cost (time and money at least) of setting up and operating the multivendor environment. The absence of most of the enterprise network hardware vendors may also have played into it, although I don’t know if that was a cause or an effect. As fascinating as Extremo the Monkey was, I don’t think an all-Extreme Networks InteropNet would have really demonstrated interoperability that well.

I didn’t talk to any of the network vendors who weren’t there, but some of the software layer vendors were unabashedly disappointed by the loss of InteropNet. It’s one thing to show a video recording or demo over VPN back to a lab somewhere, but it’s a much more convincing story to show how your product or service would react to a real world environment that your prospective customer is a part of, at that moment.

There were a number of OEM/ODM-type network (and server) manufacturers, as well as software-defined networking companies like Cumulus and 128 Technology, but I think at least one big name would have to be there to make InteropNet work. Two or three would make it even better.

One interesting thought to make InteropNet more interesting and practical would be for a hardware refurbisher or reseller to bring in gear from the big names and set it up. Whether it’s ServerMonkey or another vendor of that class, or even a broad spectrum integrator like Redapt, it would be a good way to show a less-than-bleeding-edge production-grade environment that might appeal more to the half of the attendees whose companies are smaller than 1000 people. It would be a great opportunity for companies like that to showcase their consulting and services offerings as well.

Looking into the rsts11 crystal ball…

I don’t remember any mention of venue for next year, but I would guess some rooms and locations would be tweaked to optimize MGM Grand for InteropITX 2018. It’s very convenient for economical rooms and minimal leaving-the-hotel-complex requirements for attendees.

The new tracks structure worked, for the most part, although I expect adjustment and evolution in the content. Don’t be surprised if more hands-on sessions come around. Even though wireless tech was in short supply in the Business Hall, it was very popular in the breakouts.

I’m not expecting the Monsters of IT to have a resurgence in 2018, although it might be a good thing if they did. More security, management and automation, and some surprising new startups, are more likely to find their way into the Business Hall.

Where do we go from here?

I was asked at Interop for suggestions on how to make InteropNet more practical next year. I had some ideas above, but I could use some help. Do you feel it was an unfortunate omission, or were you more inclined toward “I wouldn’t say I was missing it, Bob”?

We’ll have some more coverage in the next couple of weeks, including another update on NBase-T network technology (which made a much more substantial showing in terms of available-to-buy-today offerings this year), so stay tuned to our “interop” tag for the latest.

And of course, while it’s too early for me to apply for media credentials, it’s not too early to start thinking about InteropITX 2018.

Registration isn’t quite ready yet, but you can sign up to be notified (and get updates on submitting to present next year as well!). Click above or visit interop.com to join the notification list today!

Disclosure: I attend InteropITX as independent media, unrelated to and unaffiliated with my day job. Neither UBM/InteropITX nor any vendor covered have influence over or responsibility for any of my coverage.

Opengear switches things up at Interop ITX 2017

Opengear, established in 2004, is one of those companies whose products aren’t always visible, but tend to be there when you need them. Starting out with traditional serial console servers used to provide remote out-of-band management access, they’ve expanded that scope over the past 13 years to include monitoring software, Ethernet and cellular failover, centralized management of their appliances, and zero-touch provisioning from the systems themselves.

I sat down with Todd Rychecky, VP of Sales for the Americas business at Opengear, during Interop ITX 2017 in Las Vegas in mid-May to get a feel for how things have been going for Opengear lately.

Let’s just say they’re going well.

The business has been growing 50% year-over-year for the past nine years, and Rychecky’s sales force has been expanding along with the company’s engineering team. This was impressive to hear for a technology that many don’t even think about anymore. But following the needs for the technology is what has kept Opengear going for the past decade, driving the company to over 30% cellular deployment in the seven years since it began integrating mobile data networks.

Larger businesses with hyperscale infrastructure have been coming to Opengear for the highly resilient, centrally manageable SmartOOB devices they provide. Having bidirectional fault-tolerant connectivity options (including Ethernet, multiple cellular connections, and even POTS-based modems) helps in environments where reliable in-band connectivity may be more of a dream than a reality. And the growth in hyperscale infrastructure deployments has spread into traditional large enterprise.

They’ve also added to their industry expertise with the recent addition of CTO Marcio Saito, formerly CTO at Bay Area pioneer Cyclades (later acquired by Avocent). With sixteen years of adjacent experience, including the sorts of growth that Opengear is going through now, Saito looks to be an interesting accelerator for the business.

But what’s on the truck?

Opengear announced the newest member of the IM7200 Infrastructure Manager family, the IM7216-2-24E line of hybrid serial-and-Ethernet devices.

[Photo: Opengear IM7216-2-24E, rear]

Featuring a 16-port serial console segment alongside a 24-port Gigabit Ethernet switch, the 24E products offer WiFi, v.92 modem (pictured here) or multi-carrier-friendly LTE, and copper/SFP gigabit uplink ports as well as dual power supplies.

[Photo: Opengear IM7216-2-24E, front]

The IM7200 systems come with 16GB of internal flash storage, expandable via a pair of USB 3.0 ports on the front. If you’re using these boxes as install servers, which would be a great use for the Ethernet switch, you can set up your ISO or package repositories to serve up OS and configuration, from DHCP to deployment.
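As a generic illustration of that DHCP-to-deployment flow (not Opengear-specific; the interface name and paths here are assumptions), a minimal dnsmasq configuration serving DHCP and PXE boot might look like:

```
# Minimal dnsmasq PXE install-server sketch (hypothetical interface and paths)
interface=eth1                      # serve installs on the lab-facing switch port
dhcp-range=192.168.10.50,192.168.10.150,12h
dhcp-boot=pxelinux.0                # hand PXE clients the bootloader
enable-tftp
tftp-root=/var/lib/tftpboot         # holds pxelinux.0 plus kernel/initrd images
```

From there, the PXE bootloader can pull a kernel and installer over TFTP, with the OS packages served from a repository on the same box.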

[Photo: Opengear IM7216-2-24U-DAC, rear]

For users wanting to manage the growing volume of devices with USB-based consoles, the IM7216-2-24U line offers 24 USB Type-A ports instead of the Ethernet switch. This product came out after Interop 2016 (around the time of Cisco Live US 2016, to be precise), and I’m not sure how I missed it at the time. The 24U models offer Gigabit Ethernet uplinks, v.92 modem, WiFi, and optional multi-carrier LTE like the other models in the IM7200 product family.

So where do we go from here?

Opengear is continuing to grow, increasing their staffing, opening a new office in Silicon Valley to support hyperscale and enterprise businesses here, and finding new opportunities to supplement and replace legacy console servers going end-of-sale.

I’ll be putting one or two of their smaller devices through their paces this summer in the lab. Last year, Opengear provided me with a four-port Resilience Gateway to explore, and back in January I mentioned that I’d be trying it out with Google’s Project Fi data-only SIM as well as Verizon for the cellular functionality. Enterprises are unlikely to use the Fi option, but home lab and POHO users may find it easier to implement than a Big Four cellular contract.

Be sure to catch up with Opengear at Cisco Live US in Las Vegas. They’ll be at booth 937 between the Collaboration and Cloud/Data Center villages, near the Cisco Live broadcast studio.

Have you found an interesting use for Opengear’s gear? Or are you out of the console world these days? Share your thoughts in the comments below.

Disclosure: I attend InteropITX as independent media, unrelated to and unaffiliated with my day job. Neither UBM/InteropITX nor Opengear have influence over or responsibility for any of my coverage.

Photos of the 24E device by Robert Novak (C)2017. Photo of the 24U device courtesy of Opengear.

First look: Checking out the “antsle” personal cloud server

Most of you know I don’t shy away from building (or refurbishing) my own computers. I used to draw the line at laptops, but in the last couple of years I’ve even rebuilt a few stripped-for-parts Dell and Toshiba laptops for the fun of it. Warped definition of “fun,” I’ll admit.

So when I saw a Facebook ad for a “cloud server” called “antsle,” I was curious but unconvinced. It was something like this:

The idea is that you’re buying a compact, fanless, silent microserver that, in addition to some fault-tolerant hardware (mirrored SSDs, ECC RAM), includes a proprietary user interface for managing and monitoring containers and virtual machines. You can cram up to 64GB of RAM in there, and while it only holds two internal drives, you can add more via USB 2.0 or USB 3.0, for up to 16TB of officially supported capacity. Not too bad, but I’ve been known to be cheap and/or resourceful, so I priced out a similar configuration assuming I’d build it myself.