
Open Source and Openness: The "Hangzhou Experiment" in Artificial Intelligence

2019-12-06


Source: 人工智能          Link: https://www.toutiao.com/a6766496463288533507/


Recently, the Ministry of Science and Technology formally replied to the Zhejiang Provincial People's Government, expressing support for Hangzhou in building a national pilot zone for the innovative development of new-generation artificial intelligence. Hangzhou thus joins the latest batch of nationally approved AI innovation and development pilot zones, following Beijing and Shanghai.



The future has arrived, and Hangzhou is ready. Centered on the goal of building a new stronghold for artificial intelligence, the city is taking pioneering, trial-first measures and optimizing its AI innovation ecosystem to accelerate construction of the pilot zone.


On the technology innovation front, the plan supports universities and research institutes such as Zhejiang University, Zhejiang Lab (之江实验室), and Westlake University, together with enterprises and related organizations, in strengthening their research layout in artificial intelligence, building open-source and open AI platforms, optimizing a new AI ecosystem, and serving innovation and entrepreneurship.


At the 2019 AIIA Artificial Intelligence Developer Conference held on November 2, the "Tianshu (天枢) Open-Source and Open AI Platform" was officially unveiled. Led by Zhejiang Lab and developed jointly with top innovators including Zhejiang University, Beijing OneFlow (北京一流科技), the China Academy of Information and Communications Technology, and Alibaba, the platform aims to be an industry-leading general-purpose AI platform, build a "circle of friends" AI ecosystem, and empower industrial development.


Open-Source Innovation: Building an Industry Ecosystem Together


To further unleash the innovative momentum of artificial intelligence, bottlenecks such as hardware and software innovation and open-source openness urgently need to be broken through. Reportedly, the Tianshu platform's core strengths include a high-performance computing framework and a one-stop, full-featured AI development kit, which will greatly improve the efficiency of AI research and development.


"To build an innovation ecosystem in which artificial intelligence develops in a healthy and efficient way, we must rally all parties through open source and openness," said Bao Hujun, deputy director of Zhejiang Lab and chief architect of the Tianshu platform, at the conference.


The Tianshu platform targets six industry domains: intelligent vision, intelligent transportation, intelligent finance, smart cities, intelligent healthcare, and intelligent robotics. It has brought together an initial group of 48 partners, including Alibaba Cloud, Ant Financial, Hikvision, SenseTime, DeepGlint (格灵深瞳), China Construction Bank, Antwork (迅蚁科技), and Ecovacs, to jointly build the AI ecosystem "circle of friends." Through co-construction of the platform and open-source innovation, these ecosystem partners will push AI technology into applications across every industry.


Today, vast amounts of healthcare data are accumulating, and medical AI is urgently needed to provide new technologies and new methods. Tianshu's self-developed core computing framework makes the processing of massive datasets more efficient, giving a strong push to research in medical natural language processing and medical image processing. Knowledge amalgamation (模型炼知) and automated machine learning will make model training in intelligent healthcare more convenient, serving medical research and clinical work and ultimately benefiting patients. In smart cities, another example, the Tianshu platform will draw on Zhejiang Lab's big-data strengths to empower thorough perception, rapid response, and scientific decision-making. Using the federated learning built into the Tianshu platform for model training, AI developers can extract the maximum value from global data while preserving data privacy.
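To illustrate the federated-learning idea mentioned above, here is a minimal federated averaging (FedAvg) sketch. It is not the Tianshu API (the article does not describe the platform's interfaces); all names and numbers are hypothetical. The key point is that each party trains on its own private data and only model parameters, never raw records, are sent for aggregation.

```python
# Minimal federated averaging (FedAvg) sketch with NumPy.
# Each party keeps its data local; only model weights are shared and averaged.
import numpy as np

rng = np.random.default_rng(0)

def make_local_data(n=200, d=5):
    """Simulate one party's private dataset for a linear model."""
    X = rng.normal(size=(n, d))
    true_w = np.arange(1, d + 1, dtype=float)
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

def local_update(w, X, y, lr=0.05, epochs=5):
    """Run a few gradient steps on local data only; return updated weights."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2.0 / len(y) * X.T @ (X @ w - y)  # gradient of mean squared error
        w -= lr * grad
    return w

# Three parties (e.g. three hospitals), each with its own private data.
parties = [make_local_data() for _ in range(3)]
d = parties[0][0].shape[1]
global_w = np.zeros(d)

for _ in range(20):                              # communication rounds
    local_weights = [local_update(global_w, X, y) for X, y in parties]
    global_w = np.mean(local_weights, axis=0)    # server averages weights only

print("learned weights:", np.round(global_w, 2))  # approaches [1, 2, 3, 4, 5]
```

In practice FedAvg weights each party's contribution by its dataset size and adds privacy protections such as secure aggregation or differential privacy; the equal-weight mean here is only the simplest case.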


Open Collaboration: Shaping a "New Posture" for AI


The launch of the Tianshu platform is just one snapshot of Hangzhou's emphasis on innovative AI development. The city has a solid foundation in research, applications, and industry, and leads the nation in its research base, platform layout, talent concentration, industrial development, integrated applications, and innovation-and-entrepreneurship ecosystem.


To date, Hangzhou accounts for roughly 6.5% of China's AI talent and has gathered a group of authoritative and influential scientists in AI and information science, including Pan Yunhe. The city is home to more than 500 representative AI enterprises and startups, among them Alibaba, Hikvision, Dahua Technology, Ant Financial, and NetEase, which together hold more than 3,000 valid invention patents.


In addition, Hangzhou is forming a new AI layout with an open, collaborative "new posture." The city will further advance the construction of industrial platforms such as the Hangzhou West Science and Technology Innovation Corridor (城西科创大走廊), the Hangzhou High-tech Zone (Binjiang), the Qiantang New Area, Xiaoshan District, the AI Town in Future Sci-Tech City (未来科技城人工智能小镇), the Hangzhou AI Industrial Park, and the Micro-Nano Intelligent Manufacturing Town in Qingshan Lake Sci-Tech City (青山湖科技城微纳智造小镇), building a leading national AI industry cluster zone.


Centered on the goal of building a new AI stronghold, Hangzhou will explore new paths and mechanisms for developing new-generation AI, strive to form replicable and generalizable experience, and play an active role in the integrated development of the Yangtze River Delta and in driving AI innovation nationwide. By 2030, the city aims to have a fairly complete innovation-and-entrepreneurship ecosystem spanning infrastructure, core technologies, pioneering industries, and integrated applications; to greatly expand the breadth and depth of AI applications; to make the AI industry a leading driver of rapid economic and social development; and to reach internationally leading levels in several fields.


After six decades of ups and downs in the AI industry, the path to industrialization has gradually become clear, and China has carved out a distinctive development route in which demand-driven needs lead business-model innovation and market applications push forward basic theory and key technologies. Empowered jointly by government, business, industry, academia, research, and end users, Hangzhou will actively cultivate its AI ecosystem, consolidate the overall industrial base, and open up boundless room for imagination for the future of China's AI industry. (Written by Gu Tingting)


