LBank Announces a Blockbuster Roadshow to Facilitate Web3 and Crypto Adoption

DUBAI, UAE / ACCESSWIRE / June 28, 2023 / LBank, a global digital asset trading platform, is set to embark on an educational initiative to raise cryptocurrency awareness in key regions worldwide. The ‘LBank Roadshow,’ a series of events running from June to August, will be held in eight major markets: India, Seoul, Bangkok, Manila, Dubai, Kuala Lumpur, Istanbul, and Tokyo. As part of the broader LBank events program, the Roadshow focuses on fostering the development of the thriving blockchain and crypto ecosystems in these regions. By empowering and inspiring younger generations to embrace Web3 and leverage crypto tools, the initiative seeks to propel the adoption and understanding of these transformative technologies.

Innovation is a fundamental driving force for any team or project striving to maintain a competitive edge, particularly in Web3, and its significance cannot be overstated. Yet even seminal ideas can go unrealized if they are not shared with the right audience. This is precisely where LBank steps in. The LBank Roadshow is a collection of complimentary one-day events that offer traders, crypto enthusiasts, developers, and builders worldwide workshops and panel discussions. The Roadshow presents an exceptional opportunity to acquire fresh Web3 skills, gain visibility, forge valuable connections, and propel your innovative ventures to new heights.

LBank recognizes the vital role crypto enthusiasts play in nurturing the Web3 ecosystem. To cater to their needs and provide an enriching experience, we have curated a series of events featuring educational sessions, panel discussions, keynote speeches, and networking opportunities with fellow enthusiasts who share your interests.

LBank CEO, Allen Wei said, “We are committed to empowering the growth and prosperity of our listed projects, and our roadshow events provide an ideal platform to enhance their market impact. These events offer a unique opportunity for attendees to engage with potential investors, developers, partners, and users while immersing themselves in the thriving crypto communities of these cities.”

He further stated: “As we establish our global presence through the ‘Web3 Connect’ Roadshow events, we strive to lead the way in driving positive change within the crypto world.”

Registrations for the Roadshow will be strictly curated to assemble high-quality cohorts at each event. Registration also comes with an affiliate program and promotional offers.

Promotional Offers
To expand its influence and gain more substantial support from the cryptocurrency community, LBank is introducing enticing incentives for traders on its platform. Users now have the chance to earn bonuses while also being invited to participate in an enriching affiliate program.

As a gesture of goodwill, LBank is granting new users the exclusive privilege of testing the platform with a live account that comes pre-funded with a registration bonus. Adding to the excitement, all new users will also receive a daily trading bonus. To claim this bonus, new users simply need to follow us.

Through LBank’s Affiliate Program, users can earn commissions of up to 50% on direct traffic, including maker fee payments. Indirect traffic generated through sub-affiliates earns an additional 20% in commissions. This presents a remarkable opportunity for cryptocurrency enthusiasts to capitalize on one of the most competitive affiliate programs in the industry and potentially earn a substantial income.
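As a rough illustration of the two-tier structure, here is a minimal sketch of the commission arithmetic, assuming the 50% rate applies to trading fees generated by direct referrals and the 20% rate to fees generated by sub-affiliates’ referrals (the release does not spell out the exact settlement mechanics):

```python
# Hypothetical two-tier affiliate commission calculator.
# The 50%/20% rates come from the announcement; how and when fees
# are settled is an assumption made for illustration only.

DIRECT_RATE = 0.50         # up to 50% of fees from directly referred traders
SUB_AFFILIATE_RATE = 0.20  # extra 20% on fees from sub-affiliates' referrals

def commission(direct_fees: float, sub_affiliate_fees: float) -> float:
    """Total commission for one settlement period, in the fee currency."""
    return direct_fees * DIRECT_RATE + sub_affiliate_fees * SUB_AFFILIATE_RATE

# Example: $1,000 in fees from direct referrals, $500 via sub-affiliates.
print(commission(1_000.0, 500.0))  # 600.0
```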

Community & Social Media: Telegram | Twitter | Facebook | LinkedIn

Contact Details:

Abhinav Mehta
LBK Blockchain Co. Limited
media@lbank.info

SOURCE: LBank

Mattermost Introduces “OpenOps” to Speed Responsible Evaluation of Generative AI Applied to Workflows

Open source AI-enhanced chat collaboration sandbox accelerates evaluation of generative AI models and usage policies in real world workflows while maintaining full data control

PALO ALTO, Calif., June 28, 2023 (GLOBE NEWSWIRE) — At the 2023 Collision Conference, Mattermost, Inc., the secure collaboration platform for technical teams, announced the launch of “OpenOps”, an open-source approach to accelerating the responsible evaluation of AI-enhanced workflows and usage policies while maintaining data control and avoiding vendor lock-in.

OpenOps emerges at the intersection of the race to leverage AI for competitive advantage and the urgent need to run trustworthy operations, including the development of usage and oversight policies and ensuring regulatory and contractually-obligated data controls.

It aims to help clear key bottlenecks between these critical concerns by enabling developers and organizations to self-host a “sandbox” environment with full data control to responsibly evaluate the benefits and risks of different AI models and usage policies on real-world, multi-user chat collaboration workflows.

The system can be used to evaluate self-hosted LLMs listed on Hugging Face, including Falcon LLM and GPT4All, when usage is optimized for data control, as well as hyperscaled, vendor-hosted models from the Azure AI platform, OpenAI ChatGPT and Anthropic Claude when usage is optimized for performance.

The first release of the OpenOps platform enables evaluation of a range of AI-augmented use cases including:

Automated Question and Answer: During collaborative and individual work, users can ask questions of generative AI models, either self-hosted or vendor-hosted, to learn about the different subject matters the model supports.

Discussion Summarization: AI-generated summaries can be created from self-hosted, chat-based discussions to accelerate information flows and decision-making while reducing the time and cost required for organizations to stay up-to-date.

Contextual Interrogation: Users can ask follow-up questions to thread summaries generated by AI bots to learn more about the underlying information without going into the raw data. For example, a discussion summary from an AI bot about a certain individual making a series of requests about troubleshooting issues could be interrogated via the AI bot for more context on why the individual made the requests and how they intended to use the information.

Sentiment Analysis: AI bots can analyze the sentiment of messages, which can be used to recommend and deliver emoji reactions on those messages on a user’s behalf. For example, after detecting a celebratory sentiment an AI bot may add a “fire” emoji reaction indicating excitement.
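To make that flow concrete, here is a minimal, self-contained sketch of sentiment-driven reactions. The keyword classifier is a toy stand-in for a real AI model, and none of the names below are part of the actual OpenOps or Mattermost API:

```python
# Toy sketch of the sentiment-to-emoji flow described above.
# classify_sentiment() stands in for a real model call; the mapping
# and function names are illustrative assumptions, not OpenOps APIs.

SENTIMENT_EMOJI = {"celebratory": "fire", "negative": "disappointed"}

def classify_sentiment(text: str) -> str:
    """Keyword stand-in for an AI sentiment model."""
    lowered = text.lower()
    if any(w in lowered for w in ("congrats", "shipped", "launched")):
        return "celebratory"
    if any(w in lowered for w in ("broken", "failed", "outage")):
        return "negative"
    return "neutral"

def pick_reaction(text: str) -> str | None:
    """Emoji name to attach on the user's behalf, or None."""
    return SENTIMENT_EMOJI.get(classify_sentiment(text))

print(pick_reaction("We shipped the release, congrats team!"))  # fire
```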

Reinforcement Learning from Human Feedback (RLHF) Collection: To help evaluate and train AI models, the system can collect feedback from users on responses from different prompts and models by recording the “thumbs up/thumbs down” signals end users select. The data can be used in the future both to fine-tune existing models and to provide input for evaluating alternate models on past user prompts.
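A minimal sketch of what collecting those thumbs up/down signals could look like; the JSONL record schema below is an assumption made for illustration, not the format OpenOps actually uses:

```python
# Sketch: append end users' thumbs up/down signals to a JSONL dataset
# for later fine-tuning or cross-model evaluation. The schema is an
# illustrative assumption, not the actual OpenOps storage format.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class FeedbackRecord:
    prompt: str
    response: str
    model: str        # e.g., "falcon-40b" or a vendor-hosted model name
    thumbs_up: bool
    timestamp: str

def record_feedback(path: str, prompt: str, response: str,
                    model: str, thumbs_up: bool) -> None:
    rec = FeedbackRecord(prompt, response, model, thumbs_up,
                         datetime.now(timezone.utc).isoformat())
    with open(path, "a", encoding="utf-8") as f:  # append-only dataset
        f.write(json.dumps(asdict(rec)) + "\n")

record_feedback("rlhf_signals.jsonl", "Summarize this thread...",
                "The team resolved the outage by...", "falcon-40b", True)
```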

This open source, self-hosted framework offers a “Customer-Controlled Operations and AI Architecture,” providing an operational hub for coordination and automation with AI bots connected to interchangeable, self-hosted generative AI and LLM backends from services like Hugging Face. These backends can scale up to private cloud and data center architectures, or scale down to run on a developer’s laptop for research and exploration. At the same time, the framework can also connect to hyperscaled, vendor-hosted models from the Azure AI platform as well as OpenAI.
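The interchangeable-backend idea can be pictured as a single interface with multiple implementations behind it. The following sketch illustrates the shape only; every class and method name here is invented for the example and is not the OpenOps plugin API:

```python
# Illustrative shape of "interchangeable AI backends": one interface,
# with self-hosted or vendor-hosted implementations behind it. All
# names are invented for this sketch, not actual OpenOps interfaces.

from abc import ABC, abstractmethod

class GenerativeBackend(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return a completion for the given prompt."""

class SelfHostedBackend(GenerativeBackend):
    """E.g., Falcon LLM or GPT4All served in-house for data control."""
    def __init__(self, endpoint: str):
        self.endpoint = endpoint  # private cloud, data center, or laptop
    def complete(self, prompt: str) -> str:
        # A real implementation would POST to the inference server here.
        return f"[self-hosted completion via {self.endpoint}]"

class VendorHostedBackend(GenerativeBackend):
    """E.g., OpenAI ChatGPT or the Azure AI platform, for performance."""
    def __init__(self, api_key: str):
        self.api_key = api_key
    def complete(self, prompt: str) -> str:
        # A real implementation would call the vendor's HTTPS API here.
        return "[vendor-hosted completion]"

def make_backend(optimize_for: str) -> GenerativeBackend:
    """Pick a backend: data control -> self-hosted, else vendor-hosted."""
    if optimize_for == "data_control":
        return SelfHostedBackend("http://llm.internal:8080")
    return VendorHostedBackend(api_key="<redacted>")

print(make_backend("data_control").complete("Hello"))
```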

“Every organization is in a race to define how AI accelerates their competitive advantage,” said Mattermost CEO Ian Tien. “We created OpenOps to help organizations responsibly unlock their potential, with the ability to evaluate a broad range of usage policies and AI models, in concert, on their ability to accelerate in-house workflows.”

The OpenOps framework recommends a four-phase approach to developing AI augmentations:

1 – Self-Hosted Sandbox – Have technical teams set up a self-hosted “sandbox” environment as a safe space with data control and auditability to explore and demonstrate Generative AI technologies. The OpenOps sandbox can include just web-based multi-user chat collaboration, or be extended to include desktop and mobile applications, integrations from different in-house tools to simulate a production environment, as well as integration with other collaboration environments, such as specific Microsoft Teams channels.

2 – Data Control Framework – Technical teams conduct an initial evaluation of different AI models on in-house use cases and set a starting point for usage policies covering data control issues with different models, based on whether models are self-hosted or vendor-hosted and, for vendor-hosted models, on their data handling assurances. For example, data control policies could range from completely blocking vendor-hosted AIs, to blocking the suspected use of sensitive data such as credit card numbers or private keys, to custom policies that can be encoded into the environment (see the sketch after this list).

3 – Trust, Safety and Compliance Framework – Trust, safety and compliance teams are invited into the sandbox environment to observe and interact with initial AI-enhanced use cases and work with technical teams to develop usage and oversight policies in addition to data control. For example, setting guidelines on whether AI can be used to help managers write performance evaluations for their teams, or whether techniques for developing malicious software can be researched using AI.

4 – Pilot and Production – Once a baseline for usage policies and initial AI enhancements are available, a group of pilot users can be added to the sandbox environment to assess the benefits of the augmentations. Technical teams can iterate on adding workflow augmentations using different AI models, while Trust, Safety and Compliance teams can monitor usage with full auditability and iterate on usage policies and their implementations. As the pilot system matures, the full set of enhancements can be deployed to production environments running a productionized version of the OpenOps framework.
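Picking up the phase-2 example above, here is a minimal sketch of a data control policy that could be encoded into the environment: screening outbound prompts for credit card numbers or private keys before they reach a vendor-hosted model. The patterns are deliberately coarse illustrations, not a production policy engine:

```python
# Sketch of an encodable data control policy (phase 2 above): screen
# outbound prompts for sensitive data before any vendor-hosted AI call.
# The regexes are coarse, illustrative assumptions.

import re

POLICIES = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def policy_violations(text: str) -> list[str]:
    """Names of all policies the text appears to violate."""
    return [name for name, pattern in POLICIES.items() if pattern.search(text)]

prompt = "Can you debug this? -----BEGIN RSA PRIVATE KEY----- MIIEow..."
violations = policy_violations(prompt)
if violations:
    print("Blocked from vendor-hosted AI:", ", ".join(violations))
```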

The OpenOps framework includes the following capabilities:

Self-Hosted Operational Hub: OpenOps allows for self-hosted operational workflows on a real-time messaging platform across web, mobile and desktop from the Mattermost open-source project. Integrations with in-house systems and popular developer tools help enrich AI backends with critical, contextual data. Workflow automation accelerates response times while reducing error rates and risk.

AI Bots with Interchangeable AI Backends: OpenOps enables AI bots to be integrated into operations while connected to an interchangeable array of AI platforms. For maximum data control, work with self-hosted, open-source LLM models including GPT4All and Falcon LLM from services like Hugging Face. For maximum performance, tap into third-party AI frameworks including OpenAI ChatGPT, the Azure AI Platform and Anthropic Claude.

Full Data Control: OpenOps enables organizations to self-host, control, and monitor all data, IP, and network traffic using their existing security and compliance infrastructure. This allows organizations to develop a rich corpus of real-world training data for future AI backend evaluation and fine-tuning.

Free and Open Source: Available under the MIT and Apache 2 licenses, OpenOps is a free, open-source system, enabling enterprises to easily deploy and run the complete architecture.

Scalability: OpenOps offers the flexibility to deploy on private clouds, data centers, or even a standard laptop. The system also removes the need for specialized hardware such as GPUs, broadening the number of developers who can explore self-hosted AI models.

The OpenOps framework is currently experimental and can be downloaded from openops.mattermost.com.

About Mattermost

Mattermost provides a secure, extensible hub for technical and operational teams that need to meet nation-state-level security and trust requirements. We serve technology, public sector, and national defense industries with customers ranging from tech giants to the U.S. Department of Defense to governmental agencies around the world.

Our self-hosted and cloud offerings provide a robust platform for technical communication across web, desktop and mobile, supporting operational workflows, incident collaboration, integration with Dev/Sec/Ops and in-house toolchains, and connection with a broad range of unified communications platforms.

We run on an open source platform vetted and deployed by the world’s most secure and mission-critical organizations, co-built with over 4,000 open source project contributors who have provided over 30,000 code improvements toward our shared product vision. The platform is translated into 20 languages.

To learn more, visit www.mattermost.com.

Mattermost and the Mattermost logo are registered trademarks of Mattermost, Inc. All other trademarks are the property of their respective owners.

Media Contact:

Amy Nicol
Press Relations
Mattermost, Inc.

+1 (650) 667-8512
media@mattermost.com

Images:

Title: Ian Tien, CEO of Mattermost, Inc.

Caption: Ian Tien, CEO of Mattermost, Inc., announces “OpenOps” platform for Controlling IP and Avoiding Lock-In as Operational Workflows become AI-Accelerated

Full image: https://www.dropbox.com/s/kn3eyxyd6eevsab/iantien_4000x2667.jpg?dl=0

Photos accompanying this announcement are available at
https://www.globenewswire.com/NewsRoom/AttachmentNg/d8db2abf-8b1b-4ed6-9f51-952fc8b97597

https://www.globenewswire.com/NewsRoom/AttachmentNg/a4e2598d-4245-4ee9-91ff-895cf0afa68b

GlobeNewswire Distribution ID 8865905

WilsonHCG named a Leader and a Star Performer in Everest Group’s 2023 Global RPO Services PEAK Matrix® Assessment

Everest Group RPO Services PEAK Matrix® Assessment 2023 – Global

TAMPA, Fla., June 28, 2023 (GLOBE NEWSWIRE) — WilsonHCG has been named a Leader and a Star Performer once again in Everest Group’s annual Global Recruitment Process Outsourcing (RPO) Services PEAK Matrix® Assessment.

The PEAK Matrix® analyzes the changing dynamics of the RPO landscape, providing an objective, data-driven comparative assessment of more than 45 RPO providers based on their overall capability across different global services markets.

“We’re honored to be named a Leader and a Star Performer yet again. Our people go above and beyond every day to help our clients’ businesses get better — their dedication to excellence is critical in today’s rapidly evolving talent landscape,” said John Wilson, CEO at WilsonHCG. “We’re also proud of our position as a Major Contender in APAC, as this is a region that we’ve continued to expand in over the past 12 months.”

Commenting on WilsonHCG’s global status as a Leader and a Star Performer, Arkadev Basak, Partner at Everest Group, said: “Along with its deep expertise in sourcing niche high-skilled roles, WilsonHCG stands out due to its global footprint and analytical offerings. Its acquisition of Claro and Tracking Talent has fortified its service offerings and helped position WilsonHCG as a Leader and a Star Performer on Everest Group’s Recruitment Process Outsourcing (RPO) Services PEAK Matrix® Assessment 2023 – Global.”

Everest Group commended WilsonHCG’s strong track record of hiring high-skill white-collar candidates and its doubling down on the healthcare and life sciences (HLS) space, as well as its significant delivery capability in North America and strong presence in EMEA.

Other highlights of the assessment include:

  • How WilsonHCG’s network of global delivery centers supports multiple buyer industries.
  • The company’s vast web of partnerships with technology vendors.
  • Its acquisition of Claro to provide a market-leading offering for talent market intelligence.

WilsonHCG was also named a Leader and a Star Performer in North America, a Major Contender and a Star Performer in EMEA and a Major Contender in APAC.

Basak continued: “WilsonHCG is a key player in North America due to its strong delivery capabilities and ability to hire niche white collar roles particularly in high-tech, healthcare and life sciences. Its string of organic and inorganic investments to increase market penetration, improve technological capabilities and advisory offerings has helped in its positioning as a Leader and a Star Performer on Everest Group’s Recruitment Process Outsourcing (RPO) Services PEAK Matrix® Assessment 2023 – North America.”

To learn more about the PEAK Matrix®, please visit the Everest Group website.

About WilsonHCG

WilsonHCG is an award-winning, global leader in total talent solutions. Operating as a strategic partner, it helps some of the world’s most admired brands build comprehensive talent functions. With a global presence spanning more than 65 countries and six continents, WilsonHCG provides a full suite of configurable talent services including recruitment process outsourcing (RPO), executive search, contingent talent solutions and technology advisory.

TALENT.™ It’s more than a solution; it’s who we are.

www.wilsonhcg.com

Media contact
Kirsty Hewitt
+44 7889901517
kirsty.hewitt@wilsonhcg.com

A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/4e63ef5e-761d-4037-b8b5-adce40d52acd

GlobeNewswire Distribution ID 8866171

ATP Electronics Launches Industrial 176-Layer PCIe® Gen 4 x4 M.2, U.2 SSDs Offering Excellent R/W Performance, 7.68 TB Highest Capacity

Fastest PCIe Generation Doubles Gen 3 Data Rate, Cuts Latency, Offers Excellent R/W Performance

ATP_Gen 4 M.2 2280 NVMe

TAIPEI, Taiwan, June 28, 2023 (GLOBE NEWSWIRE) — ATP Electronics, the global leader in specialized storage and memory solutions, introduces its latest high-speed N600 Series M.2 2280 and U.2 solid state drives (SSDs) sporting the 4th generation PCIe® interface and supporting the NVMe™ protocol. The new ATP PCIe Gen 4 SSDs’ 16 GT/s data rate is double that of the previous generation, translating to a bandwidth of 2 GB/s for every PCIe lane.

Using x4 lanes, these SSDs have a maximum bandwidth of 8 GB/s, meeting the growing need for high-speed data transfer in today’s demanding applications. This makes them suitable for read/write-intensive, mission-critical industrial applications such as networking/server, 5G, data logging, surveillance, and imaging, with performance on par with, if not better than, mainstream PCIe Gen 4 consumer SSDs on the market.
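For reference, the quoted bandwidth figures follow from standard PCIe Gen 4 arithmetic (not spelled out in the release): a 16 GT/s line rate with 128b/130b encoding gives

$$
\frac{16\ \text{GT/s} \times \frac{128}{130}}{8\ \text{bits/byte}} \approx 1.97\ \text{GB/s per lane},
\qquad
4 \times 1.97\ \text{GB/s} \approx 7.9\ \text{GB/s},
$$

which the release rounds to 2 GB/s per lane and 8 GB/s for x4.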

ATP-Gen4-U.2-NVMe_N601Sc

176-Layer NAND Flash, Onboard DRAM Offer Exceptional QoS, Lower Cost per GB with Prime 512 Gbit Die Package

The N600 Series is built on innovative 176-layer 3D NAND flash and uses a prime 512 Gbit die package, delivering not only performance improvements over 64-layer technology but also price improvements that result in a lower cost per GB.

The M.2 2280 SSDs are available in capacities from 240 GB up to 3.84 TB, while the U.2 SSDs are available from 960 GB to 7.68 TB, offering cost-effective options for diverse storage requirements.

ATP_M.2-NVMe_2021_Fin-Type

With an outstanding Quality of Service (QoS) rating compared with the previous generation, the N600 Series offers optimal consistency and predictability with higher read/write performance, high IOPS, a low write amplification index (WAI), and low latency, thanks to its onboard DRAM, which delivers higher sustained performance over long periods of operation than DRAM-less solutions.

Future-Ready, Long-Term Supply Support

Maximizing SSD lifespan, as well as the availability of replacement units long after similar consumer-grade counterparts have stopped production, is important for businesses to get the most out of their investments. This is why ATP Electronics is committed to longevity support.

ATP-Gen4-U.2-NVMe_N651Si_7.68TB

“We are thrilled to introduce this new product line based on 176-layer triple-level cell (TLC) NAND flash. While there are newer iterations of NAND being released at 2XX+ layers, these will focus on 1 Tbit and larger densities. The 176-layer 3D TLC NAND in 512 Gbit density remains the sweet-spot die density for many embedded and specialty applications, given their ongoing need for mid and lower SSD device densities.

“Besides a competitive price position, this generation will offer latency and reliability improvements across all temperature ranges. Perhaps even more important to our customer base, this generation will offer product longevity for the foreseeable future. We can work confidently with customers who often need product longevity planning of five-plus years,” said Jeff Hsieh, ATP Electronics President and Chief Executive Officer.

Gen-4-x4-M.2_U.2-SSDs

Reliable and Secure Operation
The N600 Series offers a host of reliability, security, and data integrity features, such as:

  • End-to-end data protection, TRIM function support, and LDPC error correction
  • Anti-sulfur resistors repel the damaging effects of sulfur contamination, guaranteeing continued dependable operation even in environments with high sulfur content
  • Hardware-based AES 256-bit encryption and optional TCG Opal 2.0/ IEEE 1667 security for self-encrypting drive (SED)
  • The N600Sc Series offers reliable operation across varying temperature shifts with its C-Temp (0°C to 70°C) rating; an I-Temp (-40°C to 85°C) operable N600Si Series will be available in a later release.
  • Thermal throttling intelligently adjusts the workload per operating unit time. Throttling stages are pre-configured, allowing the controller to effectively manage heat generation to keep the SSD cool. This ensures stable sustained performance and prevents the heat from damaging the device. Heatsink options are available by project and according to customer request.
  • Power loss protection (PLP) mechanism: The N600 Series U.2 and upcoming I-Temp rated M.2 2280 SSDs feature hardware-based PLP. Onboard capacitors hold up power long enough to ensure that the last read/write/erase command is completed and data is stored safely in the non-volatile flash memory. The microcontroller unit (MCU)-based design allows the PLP array to perform intelligently across various temperatures, power glitches, and charge states to protect both device and data. C-Temp rated M.2 2280 SSDs, on the other hand, feature firmware-based PLP, which effectively protects data written to the device prior to power loss.

Mission-Critical Applications: We Build With You
Depending on project support and customer request, ATP can provide hardware/firmware customization, thermal solutions customization, and engineering joint validation and collaboration.

To ensure design reliability for mission-critical applications, ATP performs extensive testing, comprehensive design/product characterization and specifications validation, and customized testing in mass production (MP) stage, such as burn-in, power cycling, specific testing scripts, and more.

Product Highlights

  • Form factors: PCIe® Gen 4 NVMe M.2 2280 and PCIe® Gen 4 NVMe U.2
  • Capacities: 240 GB to 3.84 TB (M.2 2280); 960 GB to 7.68 TB (U.2)
  • Operating temperature: C-Temp (0°C to 70°C): N600Sc; I-Temp (-40°C to 85°C): N600Si (upcoming)
  • Thermal management for optimal heat dissipation: nickel-coated copper heat spreader with 4 mm or 8 mm fin-type heatsink design (M.2 2280); 15 mm fin-type heatsink design (U.2)
  • Security: AES 256-bit encryption; TCG Opal 2.0
  • Data integrity: end-to-end data path protection
  • Performance (read/write, up to): 6,450/6,050 MB/s (M.2 2280); 6,000/5,500 MB/s (U.2)
  • Others: hot-swappable*

*By Project Support

For more information on ATP’s N600 Series PCIe Gen 4 x4 M.2 SSDs, visit:
https://www.atpinc.com/products/industrial-gen4-nvme-M.2-ssd
For more information on ATP’s N600 Series PCIe Gen 4 x4 U.2 SSDs, visit:
https://www.atpinc.com/products/industrial-gen4-U.2-ssd

Media Contact on the Press Release: Kelly Lin (Kellylin@tw.atpinc.com)
Follow ATP Electronics on LinkedIn: https://www.linkedin.com/company/atp-electronics

About ATP
ATP Electronics (“ATP”) has dedicated 30 years of manufacturing excellence as the premier provider of memory and NAND flash storage products for rigorous embedded/industrial/automotive applications. As the “Global Leader in Specialized Storage and Memory Solutions,” ATP is known for its expertise in thermal and high-endurance solutions. ATP is committed to delivering add-on value, differentiation and best TCO for customers. A true manufacturer, ATP manages every stage of the manufacturing process to ensure quality and product longevity. ATP upholds the highest standards of corporate social responsibility by ensuring sustainable value for workers, the environment, and business throughout the global supply chain. For more information on ATP Electronics, please visit www.atpinc.com or contact us at info@atpinc.com.

Photos accompanying this announcement are available at:

https://www.globenewswire.com/NewsRoom/AttachmentNg/56be0635-5027-442b-b9e7-c64c43e353d7

https://www.globenewswire.com/NewsRoom/AttachmentNg/3f0e3129-6f53-4366-b5bc-fd0cdc86f5a1

https://www.globenewswire.com/NewsRoom/AttachmentNg/63c5d538-09c1-4481-a188-e66cc487ecc9

https://www.globenewswire.com/NewsRoom/AttachmentNg/512ea038-214a-4b27-bbf4-adf8fa8a3fac

https://www.globenewswire.com/NewsRoom/AttachmentNg/400709dd-4d79-4e2a-ad76-e3bc7266f7a5

https://www.globenewswire.com/NewsRoom/AttachmentNg/933e1523-0df5-485d-9a11-e130864b199b

GlobeNewswire Distribution ID 8863001