
          Welcome to Shenzhen Deren Manufacturing Co., Ltd.
          Deren Precision Manufacturing Co., Ltd.
          Focus on custom parts and industrial blades.
          Fine products, craftsman service, 10 years of precision manufacturing.
          15814001449
          Hotline & WeChat


          Industry information

          Sora Arrives: AI Text-to-Video Opens Our Eyes

          Time:2024-02-21 Views:13594
          1. An Introduction to Sora
          On February 16, 2024, OpenAI released Sora, a large model for text-to-video generation: it turns natural-language descriptions into videos. Once the news broke, social media platforms around the world were once again stunned by OpenAI, and Sora abruptly raised the ceiling for AI video. Note that text-to-video tools such as Runway and Pika are still struggling to push coherence past a few seconds, while Sora can directly generate a 60-second, single-shot video. Remarkably, Sora has not even been officially released, yet it can already achieve this.
          The name Sora comes from the Japanese word for "sky" (そら, sora), chosen to suggest its limitless creative potential.
          Compared with the AI video models mentioned above, Sora's advantage is that it renders details accurately, understands how objects exist in the physical world, and generates characters with rich emotions. The model can also generate videos from text prompts or still images, and even fill in missing frames in existing videos.
          2. Sora's Implementation Path
          The significance of Sora lies in once again pushing the upper limit of AIGC in AI-driven content creation. Before this, text models such as ChatGPT had already begun to assist content creation, including generating illustrations and visuals, and even using virtual humans to create short videos. Sora, by contrast, is a large model focused on video generation: given text or images as input, it can edit videos in various ways, including generation, connection, and extension. It belongs to the category of multimodal large models, which extend and build on language models such as GPT.
          Sora handles video patches in much the same way GPT-4 manipulates text tokens. The key innovation lies in treating video frames as sequences of patches, analogous to word tokens in a language model, which lets it manage diverse video information effectively. By conditioning on text, Sora can generate contextually relevant and visually coherent videos from text prompts.
          In principle, Sora achieves video generation through three steps. First, a video compression network reduces videos or images to a compact, efficient latent representation. Next, spatiotemporal patch extraction decomposes that representation into smaller units, each containing a portion of the spatial and temporal information, so that Sora can process them in subsequent steps. Finally, a Transformer model (the same basic architecture underlying ChatGPT) decides how to transform and combine these units, which are then decoded into the complete video content.
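The spatiotemporal patch extraction step above can be illustrated with a minimal NumPy sketch. The patch sizes below (2 frames in time, 4x4 pixels in space) and the raw-pixel input are illustrative assumptions only; Sora actually patchifies a compressed latent representation, and its real patch dimensions are not public.

```python
import numpy as np

def extract_spacetime_patches(video, t=2, p=4):
    """Split a video of shape (T, H, W, C) into flattened spacetime
    patches of t frames by p x p pixels, yielding a token sequence
    of shape (num_patches, t * p * p * C)."""
    T, H, W, C = video.shape
    assert T % t == 0 and H % p == 0 and W % p == 0
    # Carve the video into a grid of (T//t, H//p, W//p) blocks.
    blocks = video.reshape(T // t, t, H // p, p, W // p, p, C)
    # Move the grid axes to the front, patch contents to the back.
    blocks = blocks.transpose(0, 2, 4, 1, 3, 5, 6)
    # Flatten each block into one patch "token".
    return blocks.reshape(-1, t * p * p * C)

video = np.random.rand(8, 16, 16, 3)  # 8 frames of 16x16 RGB
tokens = extract_spacetime_patches(video)
print(tokens.shape)  # (64, 96)
```

Each row is one spacetime patch, the video analogue of a word token; a Transformer then operates on this sequence regardless of the clip's duration, resolution, or aspect ratio.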
          Overall, the emergence of Sora will further promote the development of AI video generation and multimodal large models, bringing new possibilities to the field of content creation.
          3. Six Advantages of Sora
          A reporter from the Daily Economic News reviewed the report and summarized six advantages of Sora:
          (1) Accuracy and diversity: Sora can convert short text descriptions into high-definition videos up to one minute long. It accurately interprets the text users provide and generates high-quality clips with varied scenes and characters, covering a wide range of subjects, from people and animals to lush landscapes, urban scenes, gardens, and even an underwater New York City, delivering diverse content on demand. According to Medium, Sora can accurately interpret prompts of up to 135 words.
          (2) Powerful language understanding: OpenAI applies the re-captioning technique from the DALL·E model to generate descriptive captions for the visual training data, which improves textual fidelity as well as overall video quality. In addition, as with DALL·E 3, OpenAI uses GPT to expand short user prompts into longer, detailed captions that are sent to the video model. This enables Sora to generate high-quality videos that closely follow user prompts.
          (3) Generating videos from images or videos: Sora can not only convert text into video but also accept other kinds of input prompts, such as existing images or videos. This lets it perform a wide range of image and video editing tasks: creating perfectly looping videos, animating static images, and extending videos forward or backward in time. In the report, OpenAI presented demo videos generated from DALL·E 2 and DALL·E 3 images, demonstrating both Sora's capability and its potential in image and video editing.
          (4) Video extension: Because it accepts diverse input prompts, users can create videos from images or supplement existing videos. As a Transformer-based diffusion model, Sora can also extend videos forward or backward along the timeline.
          (5) Excellent device compatibility: Sora can sample at resolutions from widescreen 1920x1080 to portrait 1080x1920, and can easily handle any video size in between. This means Sora can generate content that matches the native aspect ratio of any device. Before producing high-resolution content, it can also quickly prototype at smaller sizes.
          (6) Consistency and continuity between scenes and objects: Sora can generate videos with dynamic perspective changes, and the movement of characters and scene elements in three-dimensional space appears more natural. Sora is able to handle occlusion issues well. One problem with existing models is that when objects leave the field of view, they may not be able to track them. By providing multiple frame predictions at once, Sora ensures that the subject of the image remains unchanged even when temporarily out of view.
          4. Disadvantages of Sora
          Although Sora is very powerful, it still has problems simulating the physics of complex scenes, understanding specific cause-and-effect relationships, handling spatial details, and accurately describing events that unfold over time.
          In one video generated by Sora, the overall picture is highly coherent, with excellent image quality, detail, lighting, and color. On close inspection, however, the character's legs are slightly twisted, and the stepping motion does not match the rest of the scene.
          In another video, the number of dogs keeps increasing; although the transitions are very smooth, the result may drift from the original requirements for the video.
          (1) Inaccurate simulation of physical interaction:
          The Sora model is not precise enough in simulating basic physical interactions, such as glass breakage. This may be because the model lacks sufficient examples of such physical events in the training data, or the model is unable to fully learn and understand the underlying principles of these complex physical processes.
          (2) Incorrect change in object state:
          When simulating interactions involving significant changes in object state, such as eating food, Sora may not always accurately reflect the changes. This indicates that the model may have limitations in understanding and predicting the dynamic process of object state changes.
          (3) Incoherence in long-term video samples:
          When generating long-duration video samples, Sora may produce incoherent plots or details, possibly because the model struggles to maintain contextual consistency over long time spans.
          (4) The sudden appearance of an object:
          Objects may appear in videos for no reason, indicating that the model still needs to improve its understanding of spatial and temporal continuity.
          Here we need to introduce the concept of the "world model".
          What is a world model? Let me give an example.
          From memory, you know roughly how much a cup of coffee weighs, so when you reach for one, your brain accurately predicts how much force to use, and the cup comes up smoothly; you don't even notice it. But what if the cup happens to be empty? You apply far too much force to a very light cup, and your hand immediately feels that something is wrong. You then add a note to your memory: the cup may also be empty, so your next prediction won't miss. The more you do, the more complex the world model your brain builds, and the more accurately it predicts how the world will respond. This is how humans interact with the world: through a world model.
          In videos generated by Sora, a bitten object may not always show bite marks; the model still gets things wrong at times. But it is already powerful, even frightening, because "remember first, predict later" is how humans understand the world. This mode of thinking is the world model.
          There is a sentence in Sora's technical documentation:
          Our results suggest that scaling video generation models is a promising path towards building general purpose simulators of the physical world
          In other words, what OpenAI ultimately wants to build is not a text-to-video tool but a general-purpose "physical world simulator": a world model that models the real world.
          Address: 1st Floor, No. 67, Langkou Industrial Zone, Dalang Street, Longhua District, Shenzhen
