

Last updated: Apr 24, 2019
NOT ON THE CURRENT EDITION
This blip is not on the current edition of the Radar. If it was on one of the last few editions, it is likely that it is still relevant. If the blip is older, it might no longer be relevant and our assessment might be different today. Unfortunately, we don't have the bandwidth to continuously review blips from previous editions of the Radar.
Apr 2019
Trial

Apache Beam is an open-source unified programming model for defining and executing both batch and streaming data-parallel processing pipelines. The Beam model is based on the Dataflow model, which allows us to express logic in an elegant way so that we can easily switch between batch, windowed batch or streaming. The big data-processing ecosystem has been evolving quite a lot, which can make it difficult to choose the right data-processing engine. One of the key reasons to choose Beam is that it allows us to switch between different runners: a few months ago Apache Samza was added to the other runners it already supports, which include Apache Spark, Apache Flink and Google Cloud Dataflow. Different runners have different capabilities, and providing a portable API is a difficult task. Beam tries to strike a delicate balance by actively pulling innovations from these runners into the Beam model and also working with the community to influence the roadmap of these runners. Beam has SDKs in multiple languages including Java, Python and Golang. We've also had success using Scio, which provides a Scala wrapper around Beam.
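To make the portability claim concrete, here is a minimal sketch of a Beam pipeline using the Java SDK: a hypothetical word-count job whose input and output paths are placeholders, not from the Radar text. The same pipeline code runs unchanged on any supported runner; only the --runner pipeline option changes at launch time.

```java
import java.util.Arrays;

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.Filter;
import org.apache.beam.sdk.transforms.FlatMapElements;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.TypeDescriptors;

public class WordCountPipeline {
  public static void main(String[] args) {
    // The runner is selected via pipeline options (e.g. --runner=FlinkRunner,
    // --runner=SparkRunner or --runner=DataflowRunner); the pipeline code stays the same.
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
    Pipeline p = Pipeline.create(options);

    p.apply("ReadLines", TextIO.read().from("input.txt"))   // placeholder input path
     .apply("SplitWords", FlatMapElements
         .into(TypeDescriptors.strings())
         .via((String line) -> Arrays.asList(line.split("\\W+"))))
     .apply("DropEmpty", Filter.by((String word) -> !word.isEmpty()))
     .apply("CountWords", Count.perElement())
     .apply("FormatResults", MapElements
         .into(TypeDescriptors.strings())
         .via((KV<String, Long> kv) -> kv.getKey() + ": " + kv.getValue()))
     .apply("WriteCounts", TextIO.write().to("word-counts")); // placeholder output prefix

    p.run().waitUntilFinish();
  }
}
```

To run the same logic as a streaming job, you would swap the bounded TextIO source for an unbounded one (for example Pub/Sub or Kafka IO) and add a windowing transform; the rest of the pipeline is unchanged, which is the point of the unified model.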

Nov 2018
Assess

Apache Beam is an open-source unified programming model for defining and executing both batch and streaming data-parallel processing pipelines. Beam provides a portable API layer for describing these pipelines independent of execution engines (or runners) such as Apache Spark, Apache Flink or Google Cloud Dataflow. Different runners have different capabilities, and providing a portable API is a difficult task. Beam tries to strike a delicate balance by actively pulling innovations from these runners into the Beam model and also working with the community to influence the roadmap of these runners. Beam has a rich set of built-in IO transformations that cover most data pipeline needs, and it also provides a mechanism to implement custom transformations for specific use cases. The portable API and extensible IO transformations make a compelling case for assessing Apache Beam for data pipeline needs.
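As a sketch of the extension mechanism mentioned above, reusable pipeline logic can be packaged as a composite PTransform by overriding expand. The class name and the cleanup steps below are illustrative assumptions, not built-in Beam transforms.

```java
import org.apache.beam.sdk.transforms.Filter;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.PTransform;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.TypeDescriptors;

// Hypothetical composite transform: bundles a couple of built-in transforms
// behind a single reusable step that can be applied in any pipeline.
public class CleanLines extends PTransform<PCollection<String>, PCollection<String>> {
  @Override
  public PCollection<String> expand(PCollection<String> input) {
    return input
        .apply("DropBlankLines", Filter.by((String line) -> !line.trim().isEmpty()))
        .apply("Lowercase", MapElements
            .into(TypeDescriptors.strings())
            .via((String line) -> line.toLowerCase()));
  }
}
```

A pipeline then uses it like any built-in transform, for example lines.apply(new CleanLines()). Custom IO connectors follow the same pattern, wrapping source- or sink-specific reads and writes in a PTransform so that pipeline code stays independent of where the data lives.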

Published: Nov 14, 2018
