Stockwinners Market Radar for May 29, 2023 - Earnings, Upgrades, Downgrades, Option Trades, Best Stock Advisory Service |
ILMN... | Hot Stocks20:13 EDT Fly Intel: Top five weekend stock stories - Catch up on the weekend's top five stories with this list compiled by The Fly: 1. Nvidia announced a new class of large-memory AI supercomputer - an NVIDIA DGX supercomputer powered by NVIDIA GH200 Grace Hopper Superchips and the NVIDIA NVLink Switch System - created to enable the development of giant, next-generation models for generative AI language applications, recommender systems and data analytics workloads. DGX GH200 is the first supercomputer to pair Grace Hopper Superchips with the NVIDIA NVLink Switch System, a new interconnect that enables all GPUs in a DGX GH200 system to work together as one. Google Cloud (GOOGL), Meta (META) and Microsoft (MSFT) are among the first expected to gain access to the DGX GH200 to explore its capabilities for generative AI workloads. NVIDIA is building its own DGX GH200-based AI supercomputer to power the work of its researchers and development teams. Named NVIDIA Helios, the supercomputer will feature four DGX GH200 systems. NVIDIA DGX GH200 supercomputers are expected to be available by the end of the year. 2. Call of Duty, arguably the most successful videogame franchise ever, is at the center of the debate over whether Microsoft's planned $75B acquisition of Call of Duty owner Activision Blizzard (ATVI) could give it an unfair edge to dominate the videogaming industry, The Wall Street Journal's Sarah E. Needleman reports. The U.K.'s Competition and Markets Authority mentioned Call of Duty 41 times in the 20-page summary of its decision to reject the deal last month. The U.S. Federal Trade Commission cited the game 18 times in its 23-page complaint to quash the deal in December. The European Union approved the deal this month but only after Microsoft pledged to allow competitors to stream Call of Duty and other Activision games over the cloud, the author notes. 3. In the coming months, courts in the U.S. 
and Europe will rule whether Illumina (ILMN) must abandon its $8B acquisition of the cancer-testing company Grail, under orders of antitrust authorities, Bill Alpert writes in this week's edition of Barron's. Illumina argues that it's the victim of administrative-state overreach. Millions could die of cancer, says the company, if bureaucrats bust up the deal. To critics of executive-branch power and of the FTC's activist chair, Lina Khan, Illumina's case might be their ticket to the U.S. Supreme Court and a ruling that blunts agency power. The case could reach the high court in the next term, which starts in October. One thing makes Illumina's test case less compelling: Many shareholders want the company to lose. The stock perks up when the merger looks doubtful, and slumps when the merger appears on track, the author notes. 4. Disney's (DIS) live-action remake of "The Little Mermaid" won the Memorial Day weekend at the domestic box office with a four-day debut of $117.5M. For the three-day period, the movie grossed an estimated $95.4M. Overseas, "The Little Mermaid" earned just $68.1M from 51 markets. The live-action film also earned an A CinemaScore. 5. Nvidia, Wynn (WYNN), and Estee Lauder (EL) saw positive mentions in this week's edition of Barron's.
|
SYNA | Hot Stocks20:05 EDT Synaptics partners with Boreas Technologies on piezo haptic trackpads - Synaptics announced at Computex 2023 that it has combined its S9A0H NIST SP800-193 firmware-qualified touch controller with Boreas Technologies' piezo haptics technology to deliver a high-performance trackpad module. The module provides OEMs with both force sensing and best-in-class haptic feedback while enabling thin and light trackpads with a seamless transition from a large touch-enabled area to the chassis and a consistent click sensation across 100% of the active surface. Announced last year, the S9A0H IC is an integrated hardware/software touch platform that supports the industry's highest level of firmware security, including 384-bit encryption, to prevent trackpads from being rendered inoperable. It's also scalable to meet the demand for larger, smarter, and more responsive touchpads. The module leverages the second-generation Boreas Piezo Haptic technology announced earlier this year, as well as the recently announced Boreas BOS1921 piezo driver. The driver delivers autonomous operation and sensing for piezo haptic trackpads in a single chip.
|
ACN | Hot Stocks09:35 EDT Accenture announces intent to acquire Green Domus - Accenture has announced its intent to acquire Green Domus Desenvolvimento Sustentavel LTDA, a leading Brazil-based sustainability consultancy with experience helping clients design and implement a range of sustainability services with a focus on measurable decarbonization strategies. Green Domus will join Accenture to further enhance its Sustainability Services team. Financial terms of the transaction are not being disclosed.
|
WPP NVDA | Hot Stocks09:34 EDT WPP, Nvidia partner to build AI-enabled content engine for digital advertising - Nvidia (NVDA) and WPP (WPP) announced they are developing a content engine that harnesses NVIDIA Omniverse and AI to enable creative teams to produce high-quality commercial content faster, more efficiently and at scale while staying fully aligned with a client's brand. The new engine connects an ecosystem of 3D design, manufacturing and creative supply chain tools, including those from Adobe and Getty Images, letting WPP's artists and designers integrate 3D content creation with generative AI. This enables WPP's clients to reach consumers in highly personalized and engaging ways, while preserving the quality, accuracy and fidelity of their company's brand identity, products and logos. The new content engine has at its foundation Omniverse Cloud - a platform for connecting 3D tools, and developing and operating industrial digitalization applications. This allows WPP to seamlessly connect its supply chain of product-design data from software such as Adobe's Substance 3D tools for 3D and immersive content creation, plus computer-aided design tools to create brand-accurate, photoreal digital twins of client products. WPP uses responsibly trained generative AI tools and content from partners such as Adobe and Getty Images so its designers can create varied, high-fidelity images from text prompts and bring them into scenes. This includes Adobe Firefly, a family of creative generative AI models, and exclusive visual content from Getty Images created using NVIDIA Picasso, a foundry for custom generative AI models for visual design. With the final scenes, creative teams can render large volumes of brand-accurate, 2D images and videos for classic advertising, or publish interactive 3D product configurators to NVIDIA Graphics Delivery Network, a worldwide, graphics streaming network, for consumers to experience on any web device. 
Beyond gains in speed and efficiency, the new engine improves on current methods, which require creatives to manually produce hundreds of thousands of pieces of content using disparate data from disconnected tools and systems. The partnership with NVIDIA builds on WPP's existing leadership position in emerging technologies and generative AI, with award-winning campaigns for major clients around the world. The new content engine will soon be available exclusively to WPP's clients worldwide.
|
CPG | Hot Stocks09:13 EDT Crescent Point Energy provides update on Alberta wildfires - Crescent Point Energy said it is "pleased to advise that over the past week it has brought back on stream the full 45,000 boe/d of Kaybob Duvernay production previously shut-in due to the Alberta wildfires. No damage has occurred to the company's assets." Crescent Point's 2023 guidance remains unchanged, including annual average production of 160,000 to 166,000 boe/d, despite the impact of the wildfires during second quarter. The company added it continues to monitor the wildfire situation closely and will manage potential future temporary shut-ins as required with industry partners and local authorities to ensure the safety and security of its workers and assets.
|
NVDA SFTBY | Hot Stocks07:13 EDT Nvidia collaborates with SoftBank to power next-gen data centers - Nvidia (NVDA) and SoftBank (SFTBY) announced they are collaborating on a pioneering platform for generative AI and 5G/6G applications that is based on the NVIDIA GH200 Grace Hopper Superchip and which SoftBank plans to roll out at new, distributed AI data centers across Japan. Paving the way for the rapid, worldwide deployment of generative AI applications and services, SoftBank will build data centers that can, in collaboration with Nvidia, host generative AI and wireless applications on a multi-tenant common server platform, which reduces costs and is more energy efficient. The platform will use the new NVIDIA MGX reference architecture with Arm Neoverse-based GH200 Superchips and is expected to improve performance, scalability and resource utilization of application workloads. The new data centers will be more evenly distributed across SoftBank's footprint than those used in the past, and will handle both AI and 5G workloads. This will allow them to better operate at peak capacity with low latency and at substantially lower overall energy costs. SoftBank is exploring creating 5G applications for autonomous driving, AI factories, augmented and virtual reality, computer vision and digital twins. NVIDIA Grace Hopper and NVIDIA BlueField-3 data processing units will accelerate the software-defined 5G vRAN, as well as generative AI applications, without bespoke hardware accelerators or specialized 5G CPUs. Additionally, the NVIDIA Spectrum Ethernet switch with BlueField-3 will deliver a highly precise timing protocol for 5G. The solution achieves breakthrough 5G speed on an NVIDIA-accelerated 1U MGX-based server design, with industry-high throughput of 36Gbps downlink capacity, based on publicly available data on 5G accelerators. Operators have struggled to deliver such high downlink capacity using industry-standard servers.
NVIDIA MGX is a modular reference architecture that enables system manufacturers and hyperscale customers to quickly and cost-effectively build over a hundred different server variations to suit a wide range of AI, HPC and NVIDIA Omniverse applications. By incorporating NVIDIA Aerial software for high-performance, software-defined, cloud-native 5G networks, these 5G base stations will allow operators to dynamically allocate compute resources, and achieve 2.5x power efficiency over competing products.
|
NVDA | Hot Stocks07:09 EDT Nvidia launches accelerated ethernet platform for hyperscale Generative AI - Nvidia announced NVIDIA Spectrum-X, an accelerated networking platform designed to improve the performance and efficiency of Ethernet-based AI clouds. NVIDIA Spectrum-X is built on networking innovations powered by the tight coupling of the NVIDIA Spectrum-4 Ethernet switch with the NVIDIA BlueField-3 DPU, achieving 1.7x better overall AI performance and power efficiency, along with consistent, predictable performance in multi-tenant environments. Spectrum-X is supercharged by Nvidia acceleration software and software development kits, allowing developers to build software-defined, cloud-native AI applications. The delivery of end-to-end capabilities reduces run-times of massive transformer-based generative AI models. This allows network engineers, AI data scientists and cloud service providers to improve results and make informed decisions faster. The world's top hyperscalers are adopting NVIDIA Spectrum-X, including industry-leading cloud innovators. As a blueprint and testbed for NVIDIA Spectrum-X reference designs, Nvidia is building Israel-1, a hyperscale generative AI supercomputer to be deployed in its Israeli data center on Dell PowerEdge XE9680 servers based on the NVIDIA HGX H100 eight-GPU platform, BlueField-3 DPUs and Spectrum-4 switches. The NVIDIA Spectrum-X networking platform is highly versatile and can be used in various AI applications. It uses fully standards-based Ethernet and is interoperable with Ethernet-based stacks. The platform starts with Spectrum-4, the world's first 51Tb/sec Ethernet switch built specifically for AI networks. Advanced RoCE extensions work in concert across the Spectrum-4 switches, BlueField-3 DPUs and NVIDIA LinkX optics to create an end-to-end 400GbE network that is optimized for AI clouds. 
NVIDIA Spectrum-X enhances multi-tenancy with performance isolation to ensure tenants' AI workloads perform optimally and consistently. It also offers better AI performance visibility, as it can identify performance bottlenecks, and it features completely automated fabric validation. Acceleration software driving Spectrum-X includes powerful NVIDIA SDKs such as Cumulus Linux, pure SONiC and NetQ - which together enable the networking platform's extreme performance. It also includes the NVIDIA DOCA software framework, which is at the heart of BlueField DPUs. NVIDIA Spectrum-X enables unprecedented scale: 256 ports of 200Gb/s each connected by a single switch, or 16,000 ports in a two-tier leaf-spine topology, to support the growth and expansion of AI clouds while maintaining high levels of performance and minimizing network latency. NVIDIA Spectrum-X, Spectrum-4 switches, BlueField-3 DPUs and 400G LinkX optics are available now.
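The port counts quoted above follow from simple switch arithmetic. A rough sketch, assuming a Spectrum-4 capacity of 51.2Tb/s (quoted as "51Tb/sec" above) and a conventional 50/50 host/uplink split per leaf switch, a common non-blocking leaf-spine convention not detailed in the text:

```python
# Back-of-the-envelope check of the Spectrum-X port figures quoted above.
# Assumptions (not stated in the text): 51.2 Tb/s switch capacity and a
# 50/50 split between host ports and spine uplinks on each leaf switch.

switch_capacity_gbps = 51_200  # 51.2 Tb/s Spectrum-4 switch
port_speed_gbps = 200          # 200 Gb/s per port

# Single switch: total capacity divided by per-port speed.
single_switch_ports = switch_capacity_gbps // port_speed_gbps  # 256

# Two-tier leaf-spine ceiling: half of each leaf's ports face hosts,
# and each leaf can uplink to one of up to 256 spine switches.
host_ports_per_leaf = single_switch_ports // 2     # 128
max_leaves = single_switch_ports                   # 256
max_host_ports = host_ports_per_leaf * max_leaves  # theoretical ceiling

print(single_switch_ports, max_host_ports)
```

Under these assumptions the single-switch figure works out to exactly 256 ports, and the quoted 16,000-port two-tier configuration sits comfortably inside the theoretical non-blocking ceiling of 32,768 host ports.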
|
NVDA | Hot Stocks06:56 EDT Nvidia unveils NVIDIA MGX server specification - To meet the diverse accelerated computing needs of the world's data centers, Nvidia unveiled the NVIDIA MGX server specification, which provides system manufacturers with a modular reference architecture to quickly and cost-effectively build more than 100 server variations to suit a wide range of AI, high performance computing and Omniverse applications. ASRock Rack, ASUS, GIGABYTE, Pegatron, QCT and Supermicro will adopt MGX, which can slash development costs by up to three-quarters and reduce development time by two-thirds to just six months. With MGX, manufacturers start with a basic system architecture optimized for accelerated computing for their server chassis, and then select their GPU, DPU and CPU. Design variations can address unique workloads, such as HPC, data science, large language models, edge computing, graphics and video, enterprise AI, and design and simulation. Multiple tasks like AI training and 5G can be handled on a single machine, while upgrades to future hardware generations can be frictionless. MGX can also be easily integrated into cloud and enterprise data centers. QCT and Supermicro will be the first to market, with MGX designs appearing in August. Supermicro's ARS-221GL-NR system, announced today, will include the NVIDIA Grace CPU Superchip, while QCT's S74G-2U system, also announced, will use the NVIDIA GH200 Grace Hopper Superchip. Additionally, SoftBank Corp. plans to roll out multiple hyperscale data centers across Japan and use MGX to dynamically allocate GPU resources between generative AI and 5G applications. In addition to hardware, MGX is supported by Nvidia's full software stack, which enables developers and enterprises to build and accelerate AI, HPC and other applications. 
This includes NVIDIA AI Enterprise, the software layer of the NVIDIA AI platform, which features over 100 frameworks, pretrained models and development tools to accelerate AI and data science for fully supported enterprise AI development and deployment. MGX is compatible with the Open Compute Project and Electronic Industries Alliance server racks, for quick integration into enterprise and cloud data centers.
|
NVDA... | Hot Stocks06:53 EDT Nvidia announces DGX GH200 AI supercomputer - Nvidia announced a new class of large-memory AI supercomputer - an NVIDIA DGX supercomputer powered by NVIDIA GH200 Grace Hopper Superchips and the NVIDIA NVLink Switch System - created to enable the development of giant, next-generation models for generative AI language applications, recommender systems and data analytics workloads. The NVIDIA DGX GH200's massive shared memory space uses NVLink interconnect technology with the NVLink Switch System to combine 256 GH200 superchips, allowing them to perform as a single GPU. This provides 1 exaflop of performance and 144 terabytes of shared memory - nearly 500x more memory than the previous generation NVIDIA DGX A100, which was introduced in 2020. GH200 superchips eliminate the need for a traditional CPU-to-GPU PCIe connection by combining an Arm-based NVIDIA Grace CPU with an NVIDIA H100 Tensor Core GPU in the same package, using NVIDIA NVLink-C2C chip interconnects. This increases the bandwidth between GPU and CPU by 7x compared with the latest PCIe technology, slashes interconnect power consumption by more than 5x, and provides a 600GB Hopper architecture GPU building block for DGX GH200 supercomputers. DGX GH200 is the first supercomputer to pair Grace Hopper Superchips with the NVIDIA NVLink Switch System, a new interconnect that enables all GPUs in a DGX GH200 system to work together as one. The previous-generation system only provided for eight GPUs to be combined with NVLink as one GPU without compromising performance. The DGX GH200 architecture provides 48x more NVLink bandwidth than the previous generation, delivering the power of a massive AI supercomputer with the simplicity of programming a single GPU. Google Cloud (GOOGL), Meta (META) and Microsoft (MSFT) are among the first expected to gain access to the DGX GH200 to explore its capabilities for generative AI workloads. 
Nvidia also intends to provide the DGX GH200 design as a blueprint to cloud service providers and other hyperscalers so they can further customize it for their infrastructure. NVIDIA is building its own DGX GH200-based AI supercomputer to power the work of its researchers and development teams. Named NVIDIA Helios, the supercomputer will feature four DGX GH200 systems. Each will be interconnected with NVIDIA Quantum-2 InfiniBand networking to supercharge data throughput for training large AI models. Helios will include 1,024 Grace Hopper Superchips and is expected to come online by the end of the year. DGX GH200 supercomputers include NVIDIA software to provide a turnkey, full-stack solution for the largest AI and data analytics workloads. NVIDIA Base Command software provides AI workflow management, enterprise-grade cluster management, libraries that accelerate compute, storage and network infrastructure, and system software optimized for running AI workloads. Also included is NVIDIA AI Enterprise, the software layer of the NVIDIA AI platform. It provides over 100 frameworks, pretrained models and development tools to streamline development and deployment of production AI including generative AI, computer vision, speech AI and more. NVIDIA DGX GH200 supercomputers are expected to be available by the end of the year. Nvidia CEO Jensen Huang discussed NVIDIA DGX GH200 supercomputers during his keynote address at COMPUTEX.
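The memory and bandwidth multiples in the announcement can be sanity-checked with simple arithmetic. A rough sketch, assuming figures not stated above: a DGX A100 baseline of 320GB total GPU memory (8 x 40GB A100 GPUs), NVLink-C2C at 900GB/s, and PCIe Gen5 x16 at roughly 128GB/s:

```python
# Rough sanity check of the DGX GH200 figures quoted above.

superchips = 256
shared_memory_gb = 144_000  # 144 TB of shared memory

# Shared memory contributed per GH200 superchip: ~562 GB, consistent
# with the ~600GB Hopper-architecture building block mentioned above.
per_superchip_gb = shared_memory_gb / superchips

# "Nearly 500x more memory than the previous generation NVIDIA DGX A100":
dgx_a100_gb = 8 * 40  # assumed 320 GB baseline (8 x 40GB A100 GPUs)
memory_ratio = shared_memory_gb / dgx_a100_gb  # 450x, i.e. "nearly 500x"

# "Increases the bandwidth between GPU and CPU by 7x compared with the
# latest PCIe technology" - assumed specs, not stated in the text:
nvlink_c2c_gbps = 900  # NVLink-C2C, GB/s
pcie5_x16_gbps = 128   # PCIe Gen5 x16, GB/s (bidirectional)
bandwidth_ratio = nvlink_c2c_gbps / pcie5_x16_gbps  # ~7x

print(per_superchip_gb, memory_ratio, round(bandwidth_ratio, 1))
```

With these assumed baselines the quoted multiples line up: 144TB across 256 superchips is 562.5GB each, the memory ratio comes out at 450x ("nearly 500x"), and the interconnect ratio at about 7x.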
|
NVDA... | Hot Stocks06:46 EDT Nvidia GH200 Grace Hopper Superchip enters full production - Nvidia (NVDA) announced that the NVIDIA GH200 Grace Hopper Superchip is in full production, set to power systems coming online worldwide to run complex AI and HPC workloads. The GH200-powered systems join more than 400 system configurations powered by different combinations of NVIDIA's latest CPU, GPU and DPU architectures - including NVIDIA Grace, NVIDIA Hopper, NVIDIA Ada Lovelace and NVIDIA BlueField - created to help meet the surging demand for generative AI. Global hyperscalers and supercomputing centers in Europe and the U.S. are among several customers that will have access to GH200-powered systems. Taiwan manufacturers are among the many system manufacturers worldwide bringing to market a wide variety of systems powered by different combinations of NVIDIA accelerators and processors. These include AAEON, Advantech, Aetina, ASRock Rack, ASUS, GIGABYTE, Ingrasys, Inventec, Pegatron, QCT, Tyan, Wistron and Wiwynn. Additionally, global server manufacturers Cisco (CSCO), Dell Technologies (DELL), Hewlett Packard Enterprise (HPE), Lenovo, Supermicro and Eviden, an Atos company, offer a broad array of NVIDIA-accelerated systems. Cloud partners for NVIDIA H100 include Amazon Web Services (AMZN), Cirrascale, CoreWeave, Google Cloud, Lambda, Microsoft Azure, Oracle Cloud Infrastructure, Paperspace and Vultr. NVIDIA L4 GPUs are generally available on Google Cloud (GOOGL). The coming portfolio of systems accelerated by the NVIDIA Grace, Hopper and Ada Lovelace architectures provides broad support for the NVIDIA software stack, which includes NVIDIA AI, the NVIDIA Omniverse platform and NVIDIA RTX technology. NVIDIA AI Enterprise, the software layer of the NVIDIA AI platform, offers over 100 frameworks, pretrained models and development tools to streamline development and deployment of production AI, including generative AI, computer vision and speech AI. 
The NVIDIA Omniverse development platform for building and operating metaverse applications enables individuals and teams to work across multiple software suites and collaborate in real time in a shared environment. The platform is based on the Universal Scene Description framework, an open, extensible 3D language for virtual worlds. The NVIDIA RTX platform fuses ray tracing, deep learning and rasterization to fundamentally transform the creative process for content creators and developers with support for industry-leading tools and APIs. Applications built on the RTX platform bring the power of real-time photorealistic rendering and AI-enhanced graphics, video and image processing to enable millions of designers and artists to create their best work. Systems with GH200 Grace Hopper Superchips are expected to be available beginning later this year.
|
NVDA | Hot Stocks06:41 EDT Nvidia announces NVIDIA Avatar Cloud Engine for Games - Nvidia announced NVIDIA Avatar Cloud Engine for Games, a custom AI model foundry service that transforms games by bringing intelligence to non-playable characters through AI-powered natural language interactions. Developers of middleware, tools and games can use ACE for Games to build and deploy customized speech, conversation and animation AI models in their software and games. Building on NVIDIA Omniverse, ACE for Games delivers optimized AI foundation models for speech, conversation and character animation, including: NVIDIA NeMo, for building, customizing and deploying language models using proprietary data; the large language models can be customized with lore and character backstories, and protected against counterproductive or unsafe conversations via NeMo Guardrails. NVIDIA Riva, for automatic speech recognition and text-to-speech to enable live speech conversation. NVIDIA Omniverse Audio2Face, for instantly creating expressive facial animation of a game character to match any speech track. Audio2Face features Omniverse connectors for Unreal Engine 5, so developers can add facial animation directly to MetaHuman characters. Developers can integrate the entire NVIDIA ACE for Games solution or use only the components they need. Nvidia collaborated with Convai, an NVIDIA Inception startup, to showcase how developers will soon be able to use NVIDIA ACE for Games to build NPCs. Convai, which is focused on developing cutting-edge conversational AI for virtual game worlds, integrated ACE modules into its end-to-end real-time avatar platform. The neural networks enabling NVIDIA ACE for Games are optimized for different capabilities, with various size, performance and quality trade-offs. The ACE for Games foundry service will help developers fine-tune models for their games, then deploy via NVIDIA DGX Cloud, GeForce RTX PCs or on premises for real-time inferencing.
The models are optimized for latency - a critical requirement for immersive, responsive interactions in games.
|
CDNS | Hot Stocks06:26 EDT Cadence Design expands collaboration with Arm - Cadence Design announced it has continued to expand its collaboration with Arm to advance mobile device silicon success, providing customers with a faster path to tapeout through use of Cadence digital and verification tools and the new Arm Total Compute Solutions 2023, which includes the Arm Cortex-X4, Arm Cortex-A720 and Cortex-A520 CPUs and Immortalis-G720, Mali-G720 and Mali-G620 GPUs. Through this latest collaboration, Cadence delivered comprehensive RTL-to-GDS digital flow Rapid Adoption Kits for 3nm and 5nm nodes to help customers achieve power and performance goals using the new Arm TCS23. The Cadence tools optimized for the new Arm TCS23 include the Cadence Cerebrus Intelligent Chip Explorer, Genus Synthesis Solution, Modus DFT Software Solution, Innovus Implementation System, Quantus Extraction Solution, Tempus Timing Signoff Solution and ECO Option, Voltus IC Power Integrity Solution, Conformal Equivalence Checking and Conformal Low Power. Cadence Cerebrus provided Arm with AI-driven design optimization capabilities that resulted in 50% better timing, a 10% reduction in cell area and 27% improved leakage power on the Cortex-X4 CPU, empowering Arm to achieve power, performance and area (PPA) targets faster. The digital RAKs provide Arm TCS23 users with several key benefits. For example, the AI-driven Cadence Cerebrus automates and scales digital chip design, providing customers with improved productivity versus a manual, iterative approach. Cadence iSpatial technology provides an integrated implementation flow, offering improved predictability and PPA, which ultimately leads to faster design closure. The RAKs also incorporate an innovative smart hierarchy flow that enables accelerated turnaround times on large, high-performance CPUs. The Tempus ECO Option, which provides path-based analysis, is integrated into the flow for signoff-accurate, final design closure. 
Finally, the RAKs utilize the GigaOpt activity-aware power optimization engine, which is incorporated with the Innovus Implementation System and the Genus Synthesis Solution to dramatically reduce dynamic power consumption. Arm used the Cadence verification flow to validate the Cortex-X4, Cortex-A720 and Cortex-A520 CPU-based and Immortalis-G720, Mali-G720 and Mali-G620 GPU-based mobile reference platforms. The Cadence verification flow supports Arm TCS23 and includes the Cadence Xcelium Logic Simulation Platform, Palladium Z1 and Z2 Enterprise Emulation Platforms, Helium Virtual and Hybrid Studio, Jasper Formal Verification Platform and Verisium Manager Planning and Coverage Closure tools. The Cadence verification flow lets Arm TCS23 users improve overall verification throughput and leverage advanced software debug capabilities. Cadence also validated that Cadence Perspec System Verifier, VIP and System VIP tools all support TCS23-based designs to enable customers to accelerate time to market when assembling TCS23-based SoCs. In addition, the virtual and hybrid platform reference designs include the Arm Fast Models to enable early software development and verification through the Cadence Helium Studio as well as the Cadence Palladium and Protium platforms, also known as the dynamic duo.
|
SNPS | Hot Stocks06:23 EDT Synopsys strengthens AI-enhanced collaboration with Arm - Synopsys has strengthened its AI-enhanced design collaboration with Arm, tackling extremely complex mobile chip designs on advanced nodes down to 2nm, as Arm announces its Total Compute Solutions 2023 platform at Computex. The comprehensive EDA and IP solutions, optimized for the highest levels of performance and power for Arm's latest compute platform, include the Synopsys.ai full-stack AI-driven EDA suite, Synopsys Interface and Security IP, and Synopsys Silicon Lifecycle Management PVT IP. These advancements build on decades of collaboration between the two companies to accelerate customers' delivery of high-performance, efficient Arm-based SoCs for high-end smartphones and virtual/augmented-reality applications.
|
NTES | Hot Stocks06:19 EDT NetEase Games unveils new multifunctional studio PinCool - NetEase Games has announced the establishment of a new game studio named PinCool as part of its NetEase Games division. Based in Tokyo, Japan, PinCool comprises industry experts with extensive experience across the interactive entertainment field, including video games, film creative work, live events and IP licensing. PinCool is led by Representative Director & President Ryutaro Ichimura, a 20-year veteran of the games industry and longtime producer of the Dragon Quest franchise. In addition, Takashi Ogura will join PinCool as a board member. PinCool plans to leverage its diverse expertise as an entertainment production company to provide high-quality entertainment experiences to users worldwide. While it will mainly focus on developing titles for game consoles, the company will also be involved in planning and producing a range of additional forms of entertainment.
|